00:00:00.000 Started by upstream project "autotest-per-patch" build number 126232 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.035 The recommended git tool is: git 00:00:00.035 using credential 00000000-0000-0000-0000-000000000002 00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.050 Fetching changes from the remote Git repository 00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.085 > git --version # 'git version 2.39.2' 00:00:00.085 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.104 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.104 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.600 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.613 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.627 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.627 > git config core.sparsecheckout # timeout=10 00:00:03.638 > git read-tree -mu HEAD # timeout=10 00:00:03.655 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.674 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.675 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.837 [Pipeline] Start of Pipeline 00:00:03.855 [Pipeline] library 00:00:03.858 Loading library shm_lib@master 00:00:06.886 Library shm_lib@master is cached. Copying from home. 00:00:06.916 [Pipeline] node 00:00:21.922 Still waiting to schedule task 00:00:21.922 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:34.113 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:34.115 [Pipeline] { 00:03:34.126 [Pipeline] catchError 00:03:34.128 [Pipeline] { 00:03:34.146 [Pipeline] wrap 00:03:34.159 [Pipeline] { 00:03:34.172 [Pipeline] stage 00:03:34.174 [Pipeline] { (Prologue) 00:03:34.200 [Pipeline] echo 00:03:34.202 Node: VM-host-SM4 00:03:34.208 [Pipeline] cleanWs 00:03:34.217 [WS-CLEANUP] Deleting project workspace... 00:03:34.217 [WS-CLEANUP] Deferred wipeout is used... 
00:03:34.222 [WS-CLEANUP] done 00:03:34.402 [Pipeline] setCustomBuildProperty 00:03:34.498 [Pipeline] httpRequest 00:03:34.523 [Pipeline] echo 00:03:34.525 Sorcerer 10.211.164.101 is alive 00:03:34.532 [Pipeline] httpRequest 00:03:34.536 HttpMethod: GET 00:03:34.537 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:03:34.537 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:03:34.538 Response Code: HTTP/1.1 200 OK 00:03:34.538 Success: Status code 200 is in the accepted range: 200,404 00:03:34.539 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:03:34.684 [Pipeline] sh 00:03:34.962 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:03:34.981 [Pipeline] httpRequest 00:03:35.003 [Pipeline] echo 00:03:35.005 Sorcerer 10.211.164.101 is alive 00:03:35.017 [Pipeline] httpRequest 00:03:35.022 HttpMethod: GET 00:03:35.023 URL: http://10.211.164.101/packages/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:03:35.023 Sending request to url: http://10.211.164.101/packages/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:03:35.024 Response Code: HTTP/1.1 200 OK 00:03:35.025 Success: Status code 200 is in the accepted range: 200,404 00:03:35.025 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:03:37.197 [Pipeline] sh 00:03:37.473 + tar --no-same-owner -xf spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:03:40.764 [Pipeline] sh 00:03:41.043 + git -C spdk log --oneline -n5 00:03:41.043 f604975ba doc: fix deprecation.md typo 00:03:41.043 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:03:41.043 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:03:41.043 2d30d9f83 accel: introduce tasks in sequence limit 00:03:41.043 2728651ee accel: adjust task per ch define name 00:03:41.066 [Pipeline] writeFile 00:03:41.084 [Pipeline] sh 00:03:41.360 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:41.371 [Pipeline] sh 00:03:41.651 + cat autorun-spdk.conf 00:03:41.651 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.651 SPDK_TEST_NVMF=1 00:03:41.651 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.651 SPDK_TEST_USDT=1 00:03:41.651 SPDK_TEST_NVMF_MDNS=1 00:03:41.651 SPDK_RUN_UBSAN=1 00:03:41.651 NET_TYPE=virt 00:03:41.651 SPDK_JSONRPC_GO_CLIENT=1 00:03:41.651 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:41.658 RUN_NIGHTLY=0 00:03:41.661 [Pipeline] } 00:03:41.677 [Pipeline] // stage 00:03:41.693 [Pipeline] stage 00:03:41.695 [Pipeline] { (Run VM) 00:03:41.708 [Pipeline] sh 00:03:41.980 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:41.980 + echo 'Start stage prepare_nvme.sh' 00:03:41.980 Start stage prepare_nvme.sh 00:03:41.980 + [[ -n 0 ]] 00:03:41.980 + disk_prefix=ex0 00:03:41.980 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:03:41.980 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:03:41.980 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:03:41.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.980 ++ SPDK_TEST_NVMF=1 00:03:41.980 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.980 ++ SPDK_TEST_USDT=1 00:03:41.980 ++ SPDK_TEST_NVMF_MDNS=1 00:03:41.980 ++ SPDK_RUN_UBSAN=1 00:03:41.980 ++ NET_TYPE=virt 00:03:41.980 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:41.980 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:41.980 ++ RUN_NIGHTLY=0 00:03:41.980 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:41.980 + nvme_files=() 00:03:41.980 + declare -A nvme_files 00:03:41.980 + backend_dir=/var/lib/libvirt/images/backends 00:03:41.980 + nvme_files['nvme.img']=5G 00:03:41.980 + nvme_files['nvme-cmb.img']=5G 00:03:41.980 + nvme_files['nvme-multi0.img']=4G 00:03:41.980 + nvme_files['nvme-multi1.img']=4G 00:03:41.980 + nvme_files['nvme-multi2.img']=4G 00:03:41.980 + nvme_files['nvme-openstack.img']=8G 00:03:41.980 + nvme_files['nvme-zns.img']=5G 00:03:41.980 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:41.980 + (( SPDK_TEST_FTL == 1 )) 00:03:41.980 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:41.980 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:41.980 + for nvme in "${!nvme_files[@]}" 00:03:41.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:03:41.980 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:41.980 + for nvme in "${!nvme_files[@]}" 00:03:41.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:03:41.980 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:41.980 + for nvme in "${!nvme_files[@]}" 00:03:41.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:03:42.238 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:42.238 + for nvme in "${!nvme_files[@]}" 00:03:42.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:03:42.238 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:42.238 + for nvme in "${!nvme_files[@]}" 00:03:42.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:03:42.238 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:42.238 + for nvme in "${!nvme_files[@]}" 00:03:42.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:03:42.496 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:42.496 + for nvme in "${!nvme_files[@]}" 00:03:42.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:03:42.496 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:42.496 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:03:42.757 + echo 'End stage prepare_nvme.sh' 00:03:42.757 End stage prepare_nvme.sh 00:03:42.887 [Pipeline] sh 00:03:43.163 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:43.163 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:03:43.163 00:03:43.163 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:03:43.163 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:03:43.163 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:43.163 HELP=0 00:03:43.163 DRY_RUN=0 00:03:43.163 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:03:43.163 NVME_DISKS_TYPE=nvme,nvme, 00:03:43.163 NVME_AUTO_CREATE=0 00:03:43.163 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:03:43.163 NVME_CMB=,, 00:03:43.163 NVME_PMR=,, 00:03:43.163 NVME_ZNS=,, 00:03:43.163 NVME_MS=,, 00:03:43.163 NVME_FDP=,, 00:03:43.163 
SPDK_VAGRANT_DISTRO=fedora38 00:03:43.163 SPDK_VAGRANT_VMCPU=10 00:03:43.163 SPDK_VAGRANT_VMRAM=12288 00:03:43.163 SPDK_VAGRANT_PROVIDER=libvirt 00:03:43.163 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:43.163 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:43.163 SPDK_OPENSTACK_NETWORK=0 00:03:43.163 VAGRANT_PACKAGE_BOX=0 00:03:43.163 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:43.163 FORCE_DISTRO=true 00:03:43.163 VAGRANT_BOX_VERSION= 00:03:43.163 EXTRA_VAGRANTFILES= 00:03:43.163 NIC_MODEL=e1000 00:03:43.163 00:03:43.163 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:03:43.163 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:03:46.443 Bringing machine 'default' up with 'libvirt' provider... 00:03:47.010 ==> default: Creating image (snapshot of base box volume). 00:03:47.268 ==> default: Creating domain with the following settings... 00:03:47.268 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721068161_afbafbc30746ef16d242 00:03:47.268 ==> default: -- Domain type: kvm 00:03:47.268 ==> default: -- Cpus: 10 00:03:47.268 ==> default: -- Feature: acpi 00:03:47.268 ==> default: -- Feature: apic 00:03:47.268 ==> default: -- Feature: pae 00:03:47.268 ==> default: -- Memory: 12288M 00:03:47.268 ==> default: -- Memory Backing: hugepages: 00:03:47.268 ==> default: -- Management MAC: 00:03:47.268 ==> default: -- Loader: 00:03:47.268 ==> default: -- Nvram: 00:03:47.268 ==> default: -- Base box: spdk/fedora38 00:03:47.268 ==> default: -- Storage pool: default 00:03:47.268 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721068161_afbafbc30746ef16d242.img (20G) 00:03:47.268 ==> default: -- Volume Cache: default 00:03:47.268 ==> default: -- Kernel: 00:03:47.268 ==> default: -- Initrd: 00:03:47.268 ==> default: -- Graphics Type: vnc 00:03:47.268 ==> default: -- Graphics Port: -1 00:03:47.268 ==> default: -- Graphics IP: 127.0.0.1 00:03:47.268 ==> default: -- Graphics Password: Not defined 00:03:47.268 ==> default: -- Video Type: cirrus 00:03:47.268 ==> default: -- Video VRAM: 9216 00:03:47.268 ==> default: -- Sound Type: 00:03:47.268 ==> default: -- Keymap: en-us 00:03:47.268 ==> default: -- TPM Path: 00:03:47.268 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:47.268 ==> default: -- Command line args: 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:47.268 ==> default: -> value=-drive, 00:03:47.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:47.268 ==> default: -> value=-drive, 00:03:47.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:47.268 ==> default: -> value=-drive, 00:03:47.268 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:47.268 ==> default: -> value=-drive, 00:03:47.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:47.268 ==> default: -> value=-device, 00:03:47.268 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:47.268 ==> default: Creating shared folders metadata... 00:03:47.527 ==> default: Starting domain. 00:03:49.427 ==> default: Waiting for domain to get an IP address... 00:04:07.531 ==> default: Waiting for SSH to become available... 00:04:07.531 ==> default: Configuring and enabling network interfaces... 00:04:10.810 default: SSH address: 192.168.121.40:22 00:04:10.810 default: SSH username: vagrant 00:04:10.810 default: SSH auth method: private key 00:04:13.335 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:21.445 ==> default: Mounting SSHFS shared folder... 00:04:23.402 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:04:23.402 ==> default: Checking Mount.. 00:04:24.775 ==> default: Folder Successfully Mounted! 00:04:24.775 ==> default: Running provisioner: file... 00:04:25.710 default: ~/.gitconfig => .gitconfig 00:04:26.277 00:04:26.277 SUCCESS! 00:04:26.277 00:04:26.277 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:04:26.277 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:26.277 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:04:26.277 00:04:26.287 [Pipeline] } 00:04:26.306 [Pipeline] // stage 00:04:26.316 [Pipeline] dir 00:04:26.317 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:04:26.319 [Pipeline] { 00:04:26.334 [Pipeline] catchError 00:04:26.336 [Pipeline] { 00:04:26.351 [Pipeline] sh 00:04:26.629 + vagrant ssh-config --host vagrant 00:04:26.629 + sed -ne /^Host/,$p 00:04:26.629 + tee ssh_conf 00:04:31.890 Host vagrant 00:04:31.890 HostName 192.168.121.40 00:04:31.890 User vagrant 00:04:31.890 Port 22 00:04:31.890 UserKnownHostsFile /dev/null 00:04:31.890 StrictHostKeyChecking no 00:04:31.890 PasswordAuthentication no 00:04:31.890 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:31.890 IdentitiesOnly yes 00:04:31.890 LogLevel FATAL 00:04:31.890 ForwardAgent yes 00:04:31.890 ForwardX11 yes 00:04:31.890 00:04:31.903 [Pipeline] withEnv 00:04:31.905 [Pipeline] { 00:04:31.925 [Pipeline] sh 00:04:32.210 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:32.210 source /etc/os-release 00:04:32.210 [[ -e /image.version ]] && img=$(< /image.version) 00:04:32.210 # Minimal, systemd-like check. 
00:04:32.210 if [[ -e /.dockerenv ]]; then 00:04:32.210 # Clear garbage from the node's name: 00:04:32.210 # agt-er_autotest_547-896 -> autotest_547-896 00:04:32.210 # $HOSTNAME is the actual container id 00:04:32.210 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:32.210 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:32.210 # We can assume this is a mount from a host where container is running, 00:04:32.210 # so fetch its hostname to easily identify the target swarm worker. 00:04:32.210 container="$(< /etc/hostname) ($agent)" 00:04:32.210 else 00:04:32.210 # Fallback 00:04:32.210 container=$agent 00:04:32.210 fi 00:04:32.210 fi 00:04:32.210 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:32.210 00:04:32.503 [Pipeline] } 00:04:32.525 [Pipeline] // withEnv 00:04:32.534 [Pipeline] setCustomBuildProperty 00:04:32.550 [Pipeline] stage 00:04:32.552 [Pipeline] { (Tests) 00:04:32.572 [Pipeline] sh 00:04:32.862 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:33.134 [Pipeline] sh 00:04:33.416 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:33.689 [Pipeline] timeout 00:04:33.689 Timeout set to expire in 40 min 00:04:33.691 [Pipeline] { 00:04:33.710 [Pipeline] sh 00:04:33.989 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:34.555 HEAD is now at f604975ba doc: fix deprecation.md typo 00:04:34.572 [Pipeline] sh 00:04:34.850 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:35.121 [Pipeline] sh 00:04:35.399 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:35.673 [Pipeline] sh 00:04:35.949 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:04:35.949 ++ readlink -f spdk_repo 00:04:35.949 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:35.949 + [[ -n /home/vagrant/spdk_repo ]] 00:04:35.949 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:35.949 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:35.949 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:35.949 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:35.949 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:35.949 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:04:35.949 + cd /home/vagrant/spdk_repo 00:04:35.949 + source /etc/os-release 00:04:35.949 ++ NAME='Fedora Linux' 00:04:35.949 ++ VERSION='38 (Cloud Edition)' 00:04:35.949 ++ ID=fedora 00:04:35.949 ++ VERSION_ID=38 00:04:35.949 ++ VERSION_CODENAME= 00:04:35.949 ++ PLATFORM_ID=platform:f38 00:04:35.949 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:35.949 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:35.949 ++ LOGO=fedora-logo-icon 00:04:35.949 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:35.949 ++ HOME_URL=https://fedoraproject.org/ 00:04:35.949 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:35.949 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:35.949 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:35.949 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:35.949 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:35.949 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:35.949 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:35.949 ++ SUPPORT_END=2024-05-14 00:04:35.949 ++ VARIANT='Cloud Edition' 00:04:35.949 ++ VARIANT_ID=cloud 00:04:35.949 + uname -a 00:04:35.949 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:35.949 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:36.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.513 Hugepages 00:04:36.513 node hugesize free / total 00:04:36.513 node0 1048576kB 0 / 0 00:04:36.513 node0 2048kB 0 / 0 00:04:36.513 00:04:36.513 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.513 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:36.513 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:36.513 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:36.513 + rm -f /tmp/spdk-ld-path 00:04:36.513 + source autorun-spdk.conf 00:04:36.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:36.513 ++ SPDK_TEST_NVMF=1 00:04:36.513 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:36.513 ++ SPDK_TEST_USDT=1 00:04:36.513 ++ SPDK_TEST_NVMF_MDNS=1 00:04:36.513 ++ SPDK_RUN_UBSAN=1 00:04:36.513 ++ NET_TYPE=virt 00:04:36.513 ++ SPDK_JSONRPC_GO_CLIENT=1 00:04:36.513 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:36.513 ++ RUN_NIGHTLY=0 00:04:36.513 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:36.513 + [[ -n '' ]] 00:04:36.513 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:36.513 + for M in /var/spdk/build-*-manifest.txt 00:04:36.513 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:36.513 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:36.770 + for M in /var/spdk/build-*-manifest.txt 00:04:36.770 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:36.770 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:36.770 ++ uname 00:04:36.770 + [[ Linux == \L\i\n\u\x ]] 00:04:36.770 + sudo dmesg -T 00:04:36.770 + sudo dmesg --clear 00:04:36.770 + dmesg_pid=5164 00:04:36.770 + sudo dmesg -Tw 00:04:36.770 + [[ Fedora Linux == FreeBSD ]] 00:04:36.770 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:36.770 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:36.770 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:36.770 + [[ -x /usr/src/fio-static/fio ]] 00:04:36.770 + 
export FIO_BIN=/usr/src/fio-static/fio 00:04:36.770 + FIO_BIN=/usr/src/fio-static/fio 00:04:36.770 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:36.770 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:36.770 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:36.770 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:36.770 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:36.770 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:36.770 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:36.770 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:36.770 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:36.770 Test configuration: 00:04:36.770 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:36.770 SPDK_TEST_NVMF=1 00:04:36.770 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:36.770 SPDK_TEST_USDT=1 00:04:36.770 SPDK_TEST_NVMF_MDNS=1 00:04:36.770 SPDK_RUN_UBSAN=1 00:04:36.770 NET_TYPE=virt 00:04:36.770 SPDK_JSONRPC_GO_CLIENT=1 00:04:36.770 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:36.770 RUN_NIGHTLY=0 18:30:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.770 18:30:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:36.770 18:30:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.770 18:30:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.770 18:30:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.770 18:30:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.770 18:30:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.770 18:30:11 -- paths/export.sh@5 -- $ export PATH 00:04:36.771 18:30:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.771 18:30:11 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:36.771 18:30:11 -- common/autobuild_common.sh@444 -- $ date +%s 00:04:36.771 18:30:11 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721068211.XXXXXX 00:04:36.771 18:30:11 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721068211.hH22BV 00:04:36.771 18:30:11 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:04:36.771 18:30:11 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:04:36.771 18:30:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:36.771 18:30:11 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:36.771 18:30:11 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:36.771 18:30:11 -- common/autobuild_common.sh@460 -- $ get_config_params 00:04:36.771 18:30:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:04:36.771 18:30:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.771 18:30:11 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:04:36.771 18:30:11 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:04:36.771 18:30:11 -- pm/common@17 -- $ local monitor 00:04:36.771 18:30:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.771 18:30:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.771 18:30:11 -- pm/common@25 -- $ sleep 1 00:04:36.771 18:30:11 -- pm/common@21 -- $ date +%s 00:04:36.771 18:30:11 -- pm/common@21 -- $ date +%s 00:04:36.771 18:30:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721068211 00:04:36.771 18:30:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721068211 00:04:36.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721068211_collect-vmstat.pm.log 00:04:36.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721068211_collect-cpu-load.pm.log 00:04:38.144 18:30:12 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:04:38.144 18:30:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:38.144 18:30:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:38.144 18:30:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:38.144 18:30:12 -- spdk/autobuild.sh@16 -- $ date -u 00:04:38.144 Mon Jul 15 06:30:12 PM UTC 2024 00:04:38.144 18:30:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:38.144 v24.09-pre-210-gf604975ba 00:04:38.144 18:30:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:38.144 18:30:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:38.144 18:30:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:38.144 18:30:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:38.144 18:30:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:38.144 18:30:12 -- common/autotest_common.sh@10 -- $ set +x 00:04:38.144 ************************************ 00:04:38.144 START TEST ubsan 00:04:38.144 ************************************ 00:04:38.144 using ubsan 00:04:38.144 18:30:12 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:04:38.144 00:04:38.144 
real 0m0.000s 00:04:38.144 user 0m0.000s 00:04:38.144 sys 0m0.000s 00:04:38.144 18:30:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:38.144 18:30:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:38.144 ************************************ 00:04:38.144 END TEST ubsan 00:04:38.144 ************************************ 00:04:38.144 18:30:12 -- common/autotest_common.sh@1142 -- $ return 0 00:04:38.144 18:30:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:38.144 18:30:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:38.144 18:30:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:38.144 18:30:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:38.144 18:30:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:38.144 18:30:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:38.144 18:30:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:38.145 18:30:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:38.145 18:30:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:04:38.145 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:38.145 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:38.710 Using 'verbs' RDMA provider 00:04:54.527 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:06.722 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:06.722 go version go1.21.1 linux/amd64 00:05:06.722 Creating mk/config.mk...done. 00:05:06.722 Creating mk/cc.flags.mk...done. 00:05:06.722 Type 'make' to build. 00:05:06.722 18:30:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:05:06.722 18:30:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:05:06.722 18:30:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:06.722 18:30:39 -- common/autotest_common.sh@10 -- $ set +x 00:05:06.723 ************************************ 00:05:06.723 START TEST make 00:05:06.723 ************************************ 00:05:06.723 18:30:39 make -- common/autotest_common.sh@1123 -- $ make -j10 00:05:06.723 make[1]: Nothing to be done for 'all'. 
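
For reference, the SPDK configure and build steps recorded above reduce to the following shell sequence. This is a minimal sketch reconstructed only from the flags printed in this log; the real run is driven by spdk/autobuild.sh, which also sets up resource monitors and environment handling that are omitted here.

  # Reproduce the SPDK build as configured in this run (flags copied verbatim from the log above).
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
  make -j10    # same -j10 job count that run_test passes to make in this log
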
00:05:18.915 The Meson build system 00:05:18.915 Version: 1.3.1 00:05:18.915 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:18.915 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:18.915 Build type: native build 00:05:18.915 Program cat found: YES (/usr/bin/cat) 00:05:18.915 Project name: DPDK 00:05:18.915 Project version: 24.03.0 00:05:18.915 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:18.916 C linker for the host machine: cc ld.bfd 2.39-16 00:05:18.916 Host machine cpu family: x86_64 00:05:18.916 Host machine cpu: x86_64 00:05:18.916 Message: ## Building in Developer Mode ## 00:05:18.916 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:18.916 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:18.916 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:18.916 Program python3 found: YES (/usr/bin/python3) 00:05:18.916 Program cat found: YES (/usr/bin/cat) 00:05:18.916 Compiler for C supports arguments -march=native: YES 00:05:18.916 Checking for size of "void *" : 8 00:05:18.916 Checking for size of "void *" : 8 (cached) 00:05:18.916 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:05:18.916 Library m found: YES 00:05:18.916 Library numa found: YES 00:05:18.916 Has header "numaif.h" : YES 00:05:18.916 Library fdt found: NO 00:05:18.916 Library execinfo found: NO 00:05:18.916 Has header "execinfo.h" : YES 00:05:18.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:18.916 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:18.916 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:18.916 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:18.916 Run-time dependency openssl found: YES 3.0.9 00:05:18.916 Run-time dependency libpcap found: YES 1.10.4 00:05:18.916 Has header "pcap.h" with dependency libpcap: YES 00:05:18.916 Compiler for C supports arguments -Wcast-qual: YES 00:05:18.916 Compiler for C supports arguments -Wdeprecated: YES 00:05:18.916 Compiler for C supports arguments -Wformat: YES 00:05:18.916 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:18.916 Compiler for C supports arguments -Wformat-security: NO 00:05:18.916 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:18.916 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:18.916 Compiler for C supports arguments -Wnested-externs: YES 00:05:18.916 Compiler for C supports arguments -Wold-style-definition: YES 00:05:18.916 Compiler for C supports arguments -Wpointer-arith: YES 00:05:18.916 Compiler for C supports arguments -Wsign-compare: YES 00:05:18.916 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:18.916 Compiler for C supports arguments -Wundef: YES 00:05:18.916 Compiler for C supports arguments -Wwrite-strings: YES 00:05:18.916 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:18.916 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:18.916 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:18.916 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:18.916 Program objdump found: YES (/usr/bin/objdump) 00:05:18.916 Compiler for C supports arguments -mavx512f: YES 00:05:18.916 Checking if "AVX512 checking" compiles: YES 00:05:18.916 Fetching value of define "__SSE4_2__" : 1 00:05:18.916 Fetching value of define 
"__AES__" : 1 00:05:18.916 Fetching value of define "__AVX__" : 1 00:05:18.916 Fetching value of define "__AVX2__" : 1 00:05:18.916 Fetching value of define "__AVX512BW__" : 1 00:05:18.916 Fetching value of define "__AVX512CD__" : 1 00:05:18.916 Fetching value of define "__AVX512DQ__" : 1 00:05:18.916 Fetching value of define "__AVX512F__" : 1 00:05:18.916 Fetching value of define "__AVX512VL__" : 1 00:05:18.916 Fetching value of define "__PCLMUL__" : 1 00:05:18.916 Fetching value of define "__RDRND__" : 1 00:05:18.916 Fetching value of define "__RDSEED__" : 1 00:05:18.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:18.916 Fetching value of define "__znver1__" : (undefined) 00:05:18.916 Fetching value of define "__znver2__" : (undefined) 00:05:18.916 Fetching value of define "__znver3__" : (undefined) 00:05:18.916 Fetching value of define "__znver4__" : (undefined) 00:05:18.916 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:18.916 Message: lib/log: Defining dependency "log" 00:05:18.916 Message: lib/kvargs: Defining dependency "kvargs" 00:05:18.916 Message: lib/telemetry: Defining dependency "telemetry" 00:05:18.916 Checking for function "getentropy" : NO 00:05:18.916 Message: lib/eal: Defining dependency "eal" 00:05:18.916 Message: lib/ring: Defining dependency "ring" 00:05:18.916 Message: lib/rcu: Defining dependency "rcu" 00:05:18.916 Message: lib/mempool: Defining dependency "mempool" 00:05:18.916 Message: lib/mbuf: Defining dependency "mbuf" 00:05:18.916 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:18.916 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:18.916 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:18.916 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:18.916 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:18.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:18.916 Compiler for C supports arguments -mpclmul: YES 00:05:18.916 Compiler for C supports arguments -maes: YES 00:05:18.916 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:18.916 Compiler for C supports arguments -mavx512bw: YES 00:05:18.916 Compiler for C supports arguments -mavx512dq: YES 00:05:18.916 Compiler for C supports arguments -mavx512vl: YES 00:05:18.916 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:18.916 Compiler for C supports arguments -mavx2: YES 00:05:18.916 Compiler for C supports arguments -mavx: YES 00:05:18.916 Message: lib/net: Defining dependency "net" 00:05:18.916 Message: lib/meter: Defining dependency "meter" 00:05:18.916 Message: lib/ethdev: Defining dependency "ethdev" 00:05:18.916 Message: lib/pci: Defining dependency "pci" 00:05:18.916 Message: lib/cmdline: Defining dependency "cmdline" 00:05:18.916 Message: lib/hash: Defining dependency "hash" 00:05:18.916 Message: lib/timer: Defining dependency "timer" 00:05:18.916 Message: lib/compressdev: Defining dependency "compressdev" 00:05:18.916 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:18.916 Message: lib/dmadev: Defining dependency "dmadev" 00:05:18.916 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:18.916 Message: lib/power: Defining dependency "power" 00:05:18.916 Message: lib/reorder: Defining dependency "reorder" 00:05:18.916 Message: lib/security: Defining dependency "security" 00:05:18.916 Has header "linux/userfaultfd.h" : YES 00:05:18.916 Has header "linux/vduse.h" : YES 00:05:18.916 Message: lib/vhost: Defining dependency "vhost" 00:05:18.916 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:05:18.916 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:18.916 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:18.916 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:18.916 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:18.916 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:18.916 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:18.916 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:18.916 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:18.916 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:18.916 Program doxygen found: YES (/usr/bin/doxygen) 00:05:18.916 Configuring doxy-api-html.conf using configuration 00:05:18.916 Configuring doxy-api-man.conf using configuration 00:05:18.916 Program mandb found: YES (/usr/bin/mandb) 00:05:18.916 Program sphinx-build found: NO 00:05:18.916 Configuring rte_build_config.h using configuration 00:05:18.916 Message: 00:05:18.916 ================= 00:05:18.916 Applications Enabled 00:05:18.916 ================= 00:05:18.916 00:05:18.916 apps: 00:05:18.916 00:05:18.916 00:05:18.916 Message: 00:05:18.916 ================= 00:05:18.916 Libraries Enabled 00:05:18.916 ================= 00:05:18.916 00:05:18.916 libs: 00:05:18.916 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:18.916 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:18.916 cryptodev, dmadev, power, reorder, security, vhost, 00:05:18.916 00:05:18.916 Message: 00:05:18.916 =============== 00:05:18.916 Drivers Enabled 00:05:18.916 =============== 00:05:18.916 00:05:18.916 common: 00:05:18.916 00:05:18.916 bus: 00:05:18.916 pci, vdev, 00:05:18.916 mempool: 00:05:18.916 ring, 00:05:18.916 dma: 00:05:18.916 00:05:18.916 net: 00:05:18.916 00:05:18.916 crypto: 00:05:18.916 00:05:18.916 compress: 00:05:18.916 00:05:18.916 vdpa: 00:05:18.916 00:05:18.916 00:05:18.916 Message: 00:05:18.916 ================= 00:05:18.916 Content Skipped 00:05:18.916 ================= 00:05:18.916 00:05:18.916 apps: 00:05:18.916 dumpcap: explicitly disabled via build config 00:05:18.916 graph: explicitly disabled via build config 00:05:18.916 pdump: explicitly disabled via build config 00:05:18.916 proc-info: explicitly disabled via build config 00:05:18.916 test-acl: explicitly disabled via build config 00:05:18.916 test-bbdev: explicitly disabled via build config 00:05:18.916 test-cmdline: explicitly disabled via build config 00:05:18.916 test-compress-perf: explicitly disabled via build config 00:05:18.916 test-crypto-perf: explicitly disabled via build config 00:05:18.916 test-dma-perf: explicitly disabled via build config 00:05:18.916 test-eventdev: explicitly disabled via build config 00:05:18.916 test-fib: explicitly disabled via build config 00:05:18.916 test-flow-perf: explicitly disabled via build config 00:05:18.916 test-gpudev: explicitly disabled via build config 00:05:18.916 test-mldev: explicitly disabled via build config 00:05:18.916 test-pipeline: explicitly disabled via build config 00:05:18.916 test-pmd: explicitly disabled via build config 00:05:18.916 test-regex: explicitly disabled via build config 00:05:18.916 test-sad: explicitly disabled via build config 00:05:18.916 test-security-perf: explicitly disabled via build config 00:05:18.916 00:05:18.916 libs: 00:05:18.916 argparse: 
explicitly disabled via build config 00:05:18.916 metrics: explicitly disabled via build config 00:05:18.916 acl: explicitly disabled via build config 00:05:18.916 bbdev: explicitly disabled via build config 00:05:18.916 bitratestats: explicitly disabled via build config 00:05:18.916 bpf: explicitly disabled via build config 00:05:18.916 cfgfile: explicitly disabled via build config 00:05:18.916 distributor: explicitly disabled via build config 00:05:18.916 efd: explicitly disabled via build config 00:05:18.916 eventdev: explicitly disabled via build config 00:05:18.916 dispatcher: explicitly disabled via build config 00:05:18.916 gpudev: explicitly disabled via build config 00:05:18.916 gro: explicitly disabled via build config 00:05:18.916 gso: explicitly disabled via build config 00:05:18.916 ip_frag: explicitly disabled via build config 00:05:18.916 jobstats: explicitly disabled via build config 00:05:18.916 latencystats: explicitly disabled via build config 00:05:18.916 lpm: explicitly disabled via build config 00:05:18.916 member: explicitly disabled via build config 00:05:18.916 pcapng: explicitly disabled via build config 00:05:18.916 rawdev: explicitly disabled via build config 00:05:18.916 regexdev: explicitly disabled via build config 00:05:18.916 mldev: explicitly disabled via build config 00:05:18.916 rib: explicitly disabled via build config 00:05:18.916 sched: explicitly disabled via build config 00:05:18.916 stack: explicitly disabled via build config 00:05:18.916 ipsec: explicitly disabled via build config 00:05:18.916 pdcp: explicitly disabled via build config 00:05:18.916 fib: explicitly disabled via build config 00:05:18.916 port: explicitly disabled via build config 00:05:18.916 pdump: explicitly disabled via build config 00:05:18.916 table: explicitly disabled via build config 00:05:18.916 pipeline: explicitly disabled via build config 00:05:18.916 graph: explicitly disabled via build config 00:05:18.916 node: explicitly disabled via build config 00:05:18.916 00:05:18.916 drivers: 00:05:18.916 common/cpt: not in enabled drivers build config 00:05:18.916 common/dpaax: not in enabled drivers build config 00:05:18.916 common/iavf: not in enabled drivers build config 00:05:18.916 common/idpf: not in enabled drivers build config 00:05:18.916 common/ionic: not in enabled drivers build config 00:05:18.916 common/mvep: not in enabled drivers build config 00:05:18.916 common/octeontx: not in enabled drivers build config 00:05:18.916 bus/auxiliary: not in enabled drivers build config 00:05:18.916 bus/cdx: not in enabled drivers build config 00:05:18.916 bus/dpaa: not in enabled drivers build config 00:05:18.916 bus/fslmc: not in enabled drivers build config 00:05:18.916 bus/ifpga: not in enabled drivers build config 00:05:18.917 bus/platform: not in enabled drivers build config 00:05:18.917 bus/uacce: not in enabled drivers build config 00:05:18.917 bus/vmbus: not in enabled drivers build config 00:05:18.917 common/cnxk: not in enabled drivers build config 00:05:18.917 common/mlx5: not in enabled drivers build config 00:05:18.917 common/nfp: not in enabled drivers build config 00:05:18.917 common/nitrox: not in enabled drivers build config 00:05:18.917 common/qat: not in enabled drivers build config 00:05:18.917 common/sfc_efx: not in enabled drivers build config 00:05:18.917 mempool/bucket: not in enabled drivers build config 00:05:18.917 mempool/cnxk: not in enabled drivers build config 00:05:18.917 mempool/dpaa: not in enabled drivers build config 00:05:18.917 mempool/dpaa2: 
not in enabled drivers build config 00:05:18.917 mempool/octeontx: not in enabled drivers build config 00:05:18.917 mempool/stack: not in enabled drivers build config 00:05:18.917 dma/cnxk: not in enabled drivers build config 00:05:18.917 dma/dpaa: not in enabled drivers build config 00:05:18.917 dma/dpaa2: not in enabled drivers build config 00:05:18.917 dma/hisilicon: not in enabled drivers build config 00:05:18.917 dma/idxd: not in enabled drivers build config 00:05:18.917 dma/ioat: not in enabled drivers build config 00:05:18.917 dma/skeleton: not in enabled drivers build config 00:05:18.917 net/af_packet: not in enabled drivers build config 00:05:18.917 net/af_xdp: not in enabled drivers build config 00:05:18.917 net/ark: not in enabled drivers build config 00:05:18.917 net/atlantic: not in enabled drivers build config 00:05:18.917 net/avp: not in enabled drivers build config 00:05:18.917 net/axgbe: not in enabled drivers build config 00:05:18.917 net/bnx2x: not in enabled drivers build config 00:05:18.917 net/bnxt: not in enabled drivers build config 00:05:18.917 net/bonding: not in enabled drivers build config 00:05:18.917 net/cnxk: not in enabled drivers build config 00:05:18.917 net/cpfl: not in enabled drivers build config 00:05:18.917 net/cxgbe: not in enabled drivers build config 00:05:18.917 net/dpaa: not in enabled drivers build config 00:05:18.917 net/dpaa2: not in enabled drivers build config 00:05:18.917 net/e1000: not in enabled drivers build config 00:05:18.917 net/ena: not in enabled drivers build config 00:05:18.917 net/enetc: not in enabled drivers build config 00:05:18.917 net/enetfec: not in enabled drivers build config 00:05:18.917 net/enic: not in enabled drivers build config 00:05:18.917 net/failsafe: not in enabled drivers build config 00:05:18.917 net/fm10k: not in enabled drivers build config 00:05:18.917 net/gve: not in enabled drivers build config 00:05:18.917 net/hinic: not in enabled drivers build config 00:05:18.917 net/hns3: not in enabled drivers build config 00:05:18.917 net/i40e: not in enabled drivers build config 00:05:18.917 net/iavf: not in enabled drivers build config 00:05:18.917 net/ice: not in enabled drivers build config 00:05:18.917 net/idpf: not in enabled drivers build config 00:05:18.917 net/igc: not in enabled drivers build config 00:05:18.917 net/ionic: not in enabled drivers build config 00:05:18.917 net/ipn3ke: not in enabled drivers build config 00:05:18.917 net/ixgbe: not in enabled drivers build config 00:05:18.917 net/mana: not in enabled drivers build config 00:05:18.917 net/memif: not in enabled drivers build config 00:05:18.917 net/mlx4: not in enabled drivers build config 00:05:18.917 net/mlx5: not in enabled drivers build config 00:05:18.917 net/mvneta: not in enabled drivers build config 00:05:18.917 net/mvpp2: not in enabled drivers build config 00:05:18.917 net/netvsc: not in enabled drivers build config 00:05:18.917 net/nfb: not in enabled drivers build config 00:05:18.917 net/nfp: not in enabled drivers build config 00:05:18.917 net/ngbe: not in enabled drivers build config 00:05:18.917 net/null: not in enabled drivers build config 00:05:18.917 net/octeontx: not in enabled drivers build config 00:05:18.917 net/octeon_ep: not in enabled drivers build config 00:05:18.917 net/pcap: not in enabled drivers build config 00:05:18.917 net/pfe: not in enabled drivers build config 00:05:18.917 net/qede: not in enabled drivers build config 00:05:18.917 net/ring: not in enabled drivers build config 00:05:18.917 net/sfc: not in 
enabled drivers build config 00:05:18.917 net/softnic: not in enabled drivers build config 00:05:18.917 net/tap: not in enabled drivers build config 00:05:18.917 net/thunderx: not in enabled drivers build config 00:05:18.917 net/txgbe: not in enabled drivers build config 00:05:18.917 net/vdev_netvsc: not in enabled drivers build config 00:05:18.917 net/vhost: not in enabled drivers build config 00:05:18.917 net/virtio: not in enabled drivers build config 00:05:18.917 net/vmxnet3: not in enabled drivers build config 00:05:18.917 raw/*: missing internal dependency, "rawdev" 00:05:18.917 crypto/armv8: not in enabled drivers build config 00:05:18.917 crypto/bcmfs: not in enabled drivers build config 00:05:18.917 crypto/caam_jr: not in enabled drivers build config 00:05:18.917 crypto/ccp: not in enabled drivers build config 00:05:18.917 crypto/cnxk: not in enabled drivers build config 00:05:18.917 crypto/dpaa_sec: not in enabled drivers build config 00:05:18.917 crypto/dpaa2_sec: not in enabled drivers build config 00:05:18.917 crypto/ipsec_mb: not in enabled drivers build config 00:05:18.917 crypto/mlx5: not in enabled drivers build config 00:05:18.917 crypto/mvsam: not in enabled drivers build config 00:05:18.917 crypto/nitrox: not in enabled drivers build config 00:05:18.917 crypto/null: not in enabled drivers build config 00:05:18.917 crypto/octeontx: not in enabled drivers build config 00:05:18.917 crypto/openssl: not in enabled drivers build config 00:05:18.917 crypto/scheduler: not in enabled drivers build config 00:05:18.917 crypto/uadk: not in enabled drivers build config 00:05:18.917 crypto/virtio: not in enabled drivers build config 00:05:18.917 compress/isal: not in enabled drivers build config 00:05:18.917 compress/mlx5: not in enabled drivers build config 00:05:18.917 compress/nitrox: not in enabled drivers build config 00:05:18.917 compress/octeontx: not in enabled drivers build config 00:05:18.917 compress/zlib: not in enabled drivers build config 00:05:18.917 regex/*: missing internal dependency, "regexdev" 00:05:18.917 ml/*: missing internal dependency, "mldev" 00:05:18.917 vdpa/ifc: not in enabled drivers build config 00:05:18.917 vdpa/mlx5: not in enabled drivers build config 00:05:18.917 vdpa/nfp: not in enabled drivers build config 00:05:18.917 vdpa/sfc: not in enabled drivers build config 00:05:18.917 event/*: missing internal dependency, "eventdev" 00:05:18.917 baseband/*: missing internal dependency, "bbdev" 00:05:18.917 gpu/*: missing internal dependency, "gpudev" 00:05:18.917 00:05:18.917 00:05:19.482 Build targets in project: 85 00:05:19.482 00:05:19.482 DPDK 24.03.0 00:05:19.482 00:05:19.482 User defined options 00:05:19.482 buildtype : debug 00:05:19.482 default_library : shared 00:05:19.482 libdir : lib 00:05:19.482 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:19.482 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:19.482 c_link_args : 00:05:19.482 cpu_instruction_set: native 00:05:19.482 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:19.482 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:19.482 enable_docs : false 00:05:19.482 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:19.482 enable_kmods : false 00:05:19.482 max_lcores : 128 00:05:19.482 tests : false 00:05:19.482 00:05:19.482 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:20.047 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:20.047 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:20.047 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:20.047 [3/268] Linking static target lib/librte_kvargs.a 00:05:20.047 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:20.047 [5/268] Linking static target lib/librte_log.a 00:05:20.306 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:20.608 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:20.867 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:20.867 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:20.867 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.867 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:20.867 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:20.867 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:20.867 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:20.867 [15/268] Linking static target lib/librte_telemetry.a 00:05:20.867 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:21.125 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:21.125 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:21.383 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.640 [20/268] Linking target lib/librte_log.so.24.1 00:05:21.640 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:21.640 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:21.898 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:21.898 [24/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:21.898 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:21.898 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:21.898 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:21.898 [28/268] Linking target lib/librte_kvargs.so.24.1 00:05:21.898 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:22.156 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.156 [31/268] Linking target lib/librte_telemetry.so.24.1 00:05:22.156 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:22.156 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:22.156 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:22.156 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:22.414 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:22.414 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:22.414 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:22.672 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:22.672 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:22.672 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:22.931 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:22.931 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:22.931 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:22.931 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:22.931 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:23.190 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:23.448 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:23.448 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:23.448 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:23.448 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:23.448 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:23.448 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:23.706 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:23.706 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:23.706 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:23.965 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:23.965 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:24.222 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:24.222 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:24.222 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:24.223 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:24.223 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:24.479 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:24.479 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:24.736 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:24.736 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:24.736 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:24.993 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:24.993 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:24.993 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:24.993 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:25.252 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 
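For reference, a minimal sketch of a stand-alone meson invocation that would produce the DPDK configuration summary shown above (debug build, shared libraries, only the bus/pci, bus/vdev and mempool/ring drivers enabled). The option values are taken directly from the summary; the invocation form itself is an assumption — the real call is made by SPDK's DPDK build wrapper, and the long disable_apps/disable_libs lists are not repeated here.

    # Assumed stand-alone equivalent of the configuration summarized above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=shared \
        --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false
    # The compile steps logged below are then executed by ninja inside build-tmp.
    ninja -C build-tmp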
00:05:25.252 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:25.252 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:25.252 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:25.510 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:25.510 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:25.510 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:25.776 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:25.776 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:25.776 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:26.033 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:26.033 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:26.033 [85/268] Linking static target lib/librte_ring.a 00:05:26.033 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:26.289 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:26.289 [88/268] Linking static target lib/librte_rcu.a 00:05:26.289 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:26.289 [90/268] Linking static target lib/librte_eal.a 00:05:26.289 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:26.289 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:26.547 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:26.547 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:26.547 [95/268] Linking static target lib/librte_mempool.a 00:05:26.547 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.803 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:26.803 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:26.803 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.059 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:27.059 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:27.059 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:27.059 [103/268] Linking static target lib/librte_mbuf.a 00:05:27.317 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:27.317 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:27.575 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:27.575 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:27.575 [108/268] Linking static target lib/librte_meter.a 00:05:27.575 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:27.575 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:27.575 [111/268] Linking static target lib/librte_net.a 00:05:28.141 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.141 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.141 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.399 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:05:28.399 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:28.399 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.657 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:28.657 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:28.916 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:29.172 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:29.172 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:29.429 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:29.687 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:29.687 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:29.945 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:29.945 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:29.945 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:29.945 [129/268] Linking static target lib/librte_pci.a 00:05:29.945 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:29.945 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:29.945 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:29.945 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:29.945 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:30.203 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:30.203 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:30.203 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:30.203 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:30.203 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:30.203 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:30.463 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:30.463 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:30.463 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:30.463 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:30.463 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.463 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:30.723 [147/268] Linking static target lib/librte_cmdline.a 00:05:30.981 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:30.981 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:30.981 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:30.981 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:30.981 [152/268] Linking static target lib/librte_timer.a 00:05:30.981 [153/268] Linking static target lib/librte_ethdev.a 00:05:31.239 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:31.239 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:31.239 [156/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:31.497 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:31.497 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:31.497 [159/268] Linking static target lib/librte_compressdev.a 00:05:31.755 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:31.755 [161/268] Linking static target lib/librte_hash.a 00:05:32.013 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:32.013 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:32.013 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.013 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:32.271 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:32.529 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:32.529 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:32.529 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:32.529 [170/268] Linking static target lib/librte_dmadev.a 00:05:32.529 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:32.529 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:32.529 [173/268] Linking static target lib/librte_cryptodev.a 00:05:32.787 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:32.787 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.787 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.787 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:33.045 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:33.045 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.347 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:33.347 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:33.347 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:33.347 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:33.604 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:33.604 [185/268] Linking static target lib/librte_power.a 00:05:33.604 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.862 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:34.120 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:34.120 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:34.120 [190/268] Linking static target lib/librte_security.a 00:05:34.120 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:34.377 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:34.377 [193/268] Linking static target lib/librte_reorder.a 00:05:34.942 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.942 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:35.199 [196/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.199 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:35.199 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.199 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:35.766 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:35.766 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:35.766 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:36.024 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.024 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:36.024 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:36.282 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:36.282 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:36.282 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:36.282 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:36.282 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:36.282 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:36.540 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:36.541 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:36.541 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:36.541 [215/268] Linking static target drivers/librte_bus_vdev.a 00:05:36.541 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:36.541 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:36.541 [218/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:36.541 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:36.541 [220/268] Linking static target drivers/librte_bus_pci.a 00:05:36.799 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:36.799 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:36.799 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.057 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:37.057 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:37.057 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:37.057 [227/268] Linking static target drivers/librte_mempool_ring.a 00:05:37.057 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.623 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:37.623 [230/268] Linking static target lib/librte_vhost.a 00:05:39.524 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.782 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.782 
[233/268] Linking target lib/librte_eal.so.24.1 00:05:40.040 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:40.040 [235/268] Linking target lib/librte_meter.so.24.1 00:05:40.040 [236/268] Linking target lib/librte_pci.so.24.1 00:05:40.040 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:40.040 [238/268] Linking target lib/librte_timer.so.24.1 00:05:40.040 [239/268] Linking target lib/librte_ring.so.24.1 00:05:40.040 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:40.300 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:40.300 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:40.300 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:40.300 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:40.300 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:40.300 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:40.300 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:40.300 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:40.558 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:40.558 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:40.558 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.558 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:40.558 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:40.816 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:40.816 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:40.816 [256/268] Linking target lib/librte_net.so.24.1 00:05:40.816 [257/268] Linking target lib/librte_compressdev.so.24.1 00:05:40.816 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:41.075 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:41.075 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:41.075 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:41.075 [262/268] Linking target lib/librte_security.so.24.1 00:05:41.075 [263/268] Linking target lib/librte_hash.so.24.1 00:05:41.075 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:41.333 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:41.333 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:41.333 [267/268] Linking target lib/librte_power.so.24.1 00:05:41.333 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:41.333 INFO: autodetecting backend as ninja 00:05:41.333 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:42.708 CC lib/ut_mock/mock.o 00:05:42.708 CC lib/log/log_flags.o 00:05:42.708 CC lib/log/log.o 00:05:42.708 CC lib/log/log_deprecated.o 00:05:42.708 CC lib/ut/ut.o 00:05:42.966 LIB libspdk_ut.a 00:05:42.966 LIB libspdk_log.a 00:05:42.966 SO libspdk_ut.so.2.0 00:05:43.223 LIB libspdk_ut_mock.a 00:05:43.223 SO libspdk_log.so.7.0 00:05:43.223 SO libspdk_ut_mock.so.6.0 00:05:43.224 SYMLINK libspdk_ut.so 00:05:43.224 SYMLINK libspdk_log.so 00:05:43.224 SYMLINK libspdk_ut_mock.so 00:05:43.482 CC lib/util/base64.o 00:05:43.482 CC 
lib/util/cpuset.o 00:05:43.482 CC lib/util/bit_array.o 00:05:43.482 CC lib/util/crc16.o 00:05:43.482 CC lib/dma/dma.o 00:05:43.482 CC lib/util/crc32.o 00:05:43.482 CXX lib/trace_parser/trace.o 00:05:43.482 CC lib/util/crc32c.o 00:05:43.482 CC lib/ioat/ioat.o 00:05:43.740 CC lib/vfio_user/host/vfio_user_pci.o 00:05:43.740 CC lib/util/crc32_ieee.o 00:05:43.740 CC lib/vfio_user/host/vfio_user.o 00:05:43.740 CC lib/util/crc64.o 00:05:43.740 CC lib/util/dif.o 00:05:43.740 CC lib/util/fd.o 00:05:43.740 CC lib/util/file.o 00:05:43.740 LIB libspdk_dma.a 00:05:43.740 CC lib/util/hexlify.o 00:05:43.740 SO libspdk_dma.so.4.0 00:05:43.740 LIB libspdk_ioat.a 00:05:43.998 CC lib/util/iov.o 00:05:43.998 CC lib/util/math.o 00:05:43.998 SYMLINK libspdk_dma.so 00:05:43.998 CC lib/util/pipe.o 00:05:43.998 CC lib/util/strerror_tls.o 00:05:43.998 SO libspdk_ioat.so.7.0 00:05:43.998 LIB libspdk_vfio_user.a 00:05:43.998 SYMLINK libspdk_ioat.so 00:05:43.998 CC lib/util/string.o 00:05:43.998 CC lib/util/uuid.o 00:05:43.998 SO libspdk_vfio_user.so.5.0 00:05:43.998 CC lib/util/fd_group.o 00:05:43.998 CC lib/util/xor.o 00:05:43.998 CC lib/util/zipf.o 00:05:43.998 SYMLINK libspdk_vfio_user.so 00:05:44.561 LIB libspdk_util.a 00:05:44.561 SO libspdk_util.so.9.1 00:05:44.818 LIB libspdk_trace_parser.a 00:05:44.818 SYMLINK libspdk_util.so 00:05:44.818 SO libspdk_trace_parser.so.5.0 00:05:45.074 SYMLINK libspdk_trace_parser.so 00:05:45.074 CC lib/json/json_parse.o 00:05:45.074 CC lib/rdma_provider/common.o 00:05:45.074 CC lib/json/json_util.o 00:05:45.074 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:45.074 CC lib/json/json_write.o 00:05:45.074 CC lib/vmd/vmd.o 00:05:45.074 CC lib/conf/conf.o 00:05:45.074 CC lib/idxd/idxd.o 00:05:45.074 CC lib/env_dpdk/env.o 00:05:45.074 CC lib/rdma_utils/rdma_utils.o 00:05:45.331 CC lib/idxd/idxd_user.o 00:05:45.331 LIB libspdk_rdma_provider.a 00:05:45.331 SO libspdk_rdma_provider.so.6.0 00:05:45.331 SYMLINK libspdk_rdma_provider.so 00:05:45.331 CC lib/env_dpdk/memory.o 00:05:45.331 CC lib/idxd/idxd_kernel.o 00:05:45.331 LIB libspdk_conf.a 00:05:45.331 LIB libspdk_rdma_utils.a 00:05:45.331 CC lib/vmd/led.o 00:05:45.331 SO libspdk_conf.so.6.0 00:05:45.331 SO libspdk_rdma_utils.so.1.0 00:05:45.586 LIB libspdk_json.a 00:05:45.586 SYMLINK libspdk_conf.so 00:05:45.586 CC lib/env_dpdk/pci.o 00:05:45.586 SYMLINK libspdk_rdma_utils.so 00:05:45.586 CC lib/env_dpdk/init.o 00:05:45.586 CC lib/env_dpdk/threads.o 00:05:45.586 CC lib/env_dpdk/pci_ioat.o 00:05:45.586 SO libspdk_json.so.6.0 00:05:45.586 CC lib/env_dpdk/pci_virtio.o 00:05:45.586 SYMLINK libspdk_json.so 00:05:45.586 CC lib/env_dpdk/pci_vmd.o 00:05:45.586 CC lib/env_dpdk/pci_idxd.o 00:05:45.586 LIB libspdk_idxd.a 00:05:45.843 SO libspdk_idxd.so.12.0 00:05:45.843 LIB libspdk_vmd.a 00:05:45.843 CC lib/env_dpdk/pci_event.o 00:05:45.843 SO libspdk_vmd.so.6.0 00:05:45.843 CC lib/env_dpdk/sigbus_handler.o 00:05:45.843 CC lib/env_dpdk/pci_dpdk.o 00:05:45.843 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:45.843 CC lib/jsonrpc/jsonrpc_server.o 00:05:45.843 SYMLINK libspdk_idxd.so 00:05:45.843 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:45.843 SYMLINK libspdk_vmd.so 00:05:45.843 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:45.843 CC lib/jsonrpc/jsonrpc_client.o 00:05:46.099 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:46.356 LIB libspdk_jsonrpc.a 00:05:46.356 SO libspdk_jsonrpc.so.6.0 00:05:46.612 SYMLINK libspdk_jsonrpc.so 00:05:46.612 LIB libspdk_env_dpdk.a 00:05:46.612 SO libspdk_env_dpdk.so.14.1 00:05:46.869 CC lib/rpc/rpc.o 00:05:46.869 SYMLINK 
libspdk_env_dpdk.so 00:05:47.126 LIB libspdk_rpc.a 00:05:47.126 SO libspdk_rpc.so.6.0 00:05:47.126 SYMLINK libspdk_rpc.so 00:05:47.384 CC lib/keyring/keyring.o 00:05:47.384 CC lib/keyring/keyring_rpc.o 00:05:47.384 CC lib/trace/trace_flags.o 00:05:47.384 CC lib/trace/trace.o 00:05:47.384 CC lib/notify/notify_rpc.o 00:05:47.384 CC lib/trace/trace_rpc.o 00:05:47.384 CC lib/notify/notify.o 00:05:47.642 LIB libspdk_keyring.a 00:05:47.642 LIB libspdk_notify.a 00:05:47.642 SO libspdk_keyring.so.1.0 00:05:47.900 SO libspdk_notify.so.6.0 00:05:47.900 LIB libspdk_trace.a 00:05:47.900 SO libspdk_trace.so.10.0 00:05:47.900 SYMLINK libspdk_notify.so 00:05:47.900 SYMLINK libspdk_keyring.so 00:05:47.900 SYMLINK libspdk_trace.so 00:05:48.158 CC lib/thread/iobuf.o 00:05:48.158 CC lib/thread/thread.o 00:05:48.158 CC lib/sock/sock.o 00:05:48.158 CC lib/sock/sock_rpc.o 00:05:48.722 LIB libspdk_sock.a 00:05:48.722 SO libspdk_sock.so.10.0 00:05:48.722 SYMLINK libspdk_sock.so 00:05:49.285 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:49.285 CC lib/nvme/nvme_ctrlr.o 00:05:49.285 CC lib/nvme/nvme_fabric.o 00:05:49.285 CC lib/nvme/nvme_ns_cmd.o 00:05:49.285 CC lib/nvme/nvme_ns.o 00:05:49.285 CC lib/nvme/nvme_qpair.o 00:05:49.285 CC lib/nvme/nvme_pcie.o 00:05:49.285 CC lib/nvme/nvme_pcie_common.o 00:05:49.285 CC lib/nvme/nvme.o 00:05:49.850 CC lib/nvme/nvme_quirks.o 00:05:50.107 CC lib/nvme/nvme_transport.o 00:05:50.107 CC lib/nvme/nvme_discovery.o 00:05:50.107 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:50.107 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:50.107 CC lib/nvme/nvme_tcp.o 00:05:50.363 LIB libspdk_thread.a 00:05:50.363 SO libspdk_thread.so.10.1 00:05:50.363 CC lib/nvme/nvme_opal.o 00:05:50.363 SYMLINK libspdk_thread.so 00:05:50.363 CC lib/nvme/nvme_io_msg.o 00:05:50.621 CC lib/nvme/nvme_poll_group.o 00:05:50.621 CC lib/nvme/nvme_zns.o 00:05:50.900 CC lib/nvme/nvme_stubs.o 00:05:50.900 CC lib/nvme/nvme_auth.o 00:05:50.900 CC lib/nvme/nvme_cuse.o 00:05:50.900 CC lib/nvme/nvme_rdma.o 00:05:51.158 CC lib/accel/accel.o 00:05:51.158 CC lib/accel/accel_rpc.o 00:05:51.423 CC lib/accel/accel_sw.o 00:05:51.683 CC lib/blob/blobstore.o 00:05:51.683 CC lib/blob/request.o 00:05:51.941 CC lib/init/json_config.o 00:05:51.941 CC lib/virtio/virtio.o 00:05:51.941 CC lib/blob/zeroes.o 00:05:51.941 CC lib/virtio/virtio_vhost_user.o 00:05:51.941 CC lib/init/subsystem.o 00:05:52.198 CC lib/init/subsystem_rpc.o 00:05:52.198 CC lib/virtio/virtio_vfio_user.o 00:05:52.198 CC lib/virtio/virtio_pci.o 00:05:52.198 CC lib/blob/blob_bs_dev.o 00:05:52.198 CC lib/init/rpc.o 00:05:52.456 LIB libspdk_init.a 00:05:52.456 LIB libspdk_accel.a 00:05:52.456 SO libspdk_init.so.5.0 00:05:52.456 LIB libspdk_virtio.a 00:05:52.456 SO libspdk_accel.so.15.1 00:05:52.456 LIB libspdk_nvme.a 00:05:52.456 SO libspdk_virtio.so.7.0 00:05:52.456 SYMLINK libspdk_init.so 00:05:52.713 SYMLINK libspdk_virtio.so 00:05:52.713 SYMLINK libspdk_accel.so 00:05:52.713 SO libspdk_nvme.so.13.1 00:05:52.990 CC lib/event/app.o 00:05:52.990 CC lib/event/reactor.o 00:05:52.990 CC lib/event/log_rpc.o 00:05:52.990 CC lib/event/app_rpc.o 00:05:52.990 CC lib/event/scheduler_static.o 00:05:52.990 CC lib/bdev/bdev.o 00:05:52.990 CC lib/bdev/bdev_rpc.o 00:05:52.990 CC lib/bdev/bdev_zone.o 00:05:52.990 CC lib/bdev/part.o 00:05:53.248 CC lib/bdev/scsi_nvme.o 00:05:53.248 SYMLINK libspdk_nvme.so 00:05:53.506 LIB libspdk_event.a 00:05:53.506 SO libspdk_event.so.14.0 00:05:53.764 SYMLINK libspdk_event.so 00:05:55.664 LIB libspdk_blob.a 00:05:55.664 SO libspdk_blob.so.11.0 00:05:55.664 SYMLINK 
libspdk_blob.so 00:05:55.922 CC lib/lvol/lvol.o 00:05:55.922 CC lib/blobfs/tree.o 00:05:55.922 CC lib/blobfs/blobfs.o 00:05:56.487 LIB libspdk_bdev.a 00:05:56.487 SO libspdk_bdev.so.15.1 00:05:56.487 SYMLINK libspdk_bdev.so 00:05:56.745 CC lib/ublk/ublk.o 00:05:56.745 CC lib/scsi/dev.o 00:05:56.745 CC lib/scsi/lun.o 00:05:56.745 CC lib/scsi/port.o 00:05:56.745 CC lib/ublk/ublk_rpc.o 00:05:56.745 CC lib/nvmf/ctrlr.o 00:05:56.745 CC lib/nbd/nbd.o 00:05:56.745 CC lib/ftl/ftl_core.o 00:05:57.003 LIB libspdk_blobfs.a 00:05:57.003 LIB libspdk_lvol.a 00:05:57.003 SO libspdk_blobfs.so.10.0 00:05:57.003 CC lib/nbd/nbd_rpc.o 00:05:57.260 CC lib/ftl/ftl_init.o 00:05:57.260 SO libspdk_lvol.so.10.0 00:05:57.260 SYMLINK libspdk_blobfs.so 00:05:57.260 CC lib/scsi/scsi.o 00:05:57.260 SYMLINK libspdk_lvol.so 00:05:57.260 CC lib/scsi/scsi_bdev.o 00:05:57.260 CC lib/scsi/scsi_pr.o 00:05:57.260 CC lib/ftl/ftl_layout.o 00:05:57.518 CC lib/ftl/ftl_debug.o 00:05:57.518 CC lib/ftl/ftl_io.o 00:05:57.518 CC lib/ftl/ftl_sb.o 00:05:57.518 LIB libspdk_nbd.a 00:05:57.518 CC lib/ftl/ftl_l2p.o 00:05:57.518 SO libspdk_nbd.so.7.0 00:05:57.518 SYMLINK libspdk_nbd.so 00:05:57.518 CC lib/ftl/ftl_l2p_flat.o 00:05:57.775 CC lib/scsi/scsi_rpc.o 00:05:57.775 CC lib/ftl/ftl_nv_cache.o 00:05:57.775 CC lib/ftl/ftl_band.o 00:05:57.775 LIB libspdk_ublk.a 00:05:57.775 CC lib/ftl/ftl_band_ops.o 00:05:57.775 CC lib/scsi/task.o 00:05:57.775 SO libspdk_ublk.so.3.0 00:05:57.775 CC lib/ftl/ftl_writer.o 00:05:57.775 CC lib/ftl/ftl_rq.o 00:05:58.033 SYMLINK libspdk_ublk.so 00:05:58.033 CC lib/nvmf/ctrlr_discovery.o 00:05:58.033 CC lib/ftl/ftl_reloc.o 00:05:58.033 CC lib/ftl/ftl_l2p_cache.o 00:05:58.033 LIB libspdk_scsi.a 00:05:58.033 SO libspdk_scsi.so.9.0 00:05:58.291 CC lib/nvmf/ctrlr_bdev.o 00:05:58.291 CC lib/nvmf/subsystem.o 00:05:58.291 CC lib/nvmf/nvmf.o 00:05:58.291 CC lib/nvmf/nvmf_rpc.o 00:05:58.291 SYMLINK libspdk_scsi.so 00:05:58.291 CC lib/nvmf/transport.o 00:05:58.549 CC lib/ftl/ftl_p2l.o 00:05:58.806 CC lib/ftl/mngt/ftl_mngt.o 00:05:58.806 CC lib/iscsi/conn.o 00:05:58.806 CC lib/vhost/vhost.o 00:05:59.063 CC lib/nvmf/tcp.o 00:05:59.063 CC lib/vhost/vhost_rpc.o 00:05:59.063 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:59.063 CC lib/iscsi/init_grp.o 00:05:59.319 CC lib/iscsi/iscsi.o 00:05:59.319 CC lib/vhost/vhost_scsi.o 00:05:59.319 CC lib/vhost/vhost_blk.o 00:05:59.576 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:59.576 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:59.576 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:59.835 CC lib/vhost/rte_vhost_user.o 00:05:59.835 CC lib/nvmf/stubs.o 00:05:59.835 CC lib/nvmf/mdns_server.o 00:05:59.835 CC lib/nvmf/rdma.o 00:06:00.092 CC lib/iscsi/md5.o 00:06:00.092 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:00.358 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:00.358 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:00.358 CC lib/iscsi/param.o 00:06:00.358 CC lib/iscsi/portal_grp.o 00:06:00.358 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:00.643 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:00.643 CC lib/nvmf/auth.o 00:06:00.643 CC lib/iscsi/tgt_node.o 00:06:00.643 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:00.643 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:00.643 CC lib/iscsi/iscsi_subsystem.o 00:06:00.902 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:00.902 CC lib/ftl/utils/ftl_conf.o 00:06:00.902 CC lib/ftl/utils/ftl_md.o 00:06:00.902 LIB libspdk_vhost.a 00:06:00.902 CC lib/iscsi/iscsi_rpc.o 00:06:01.159 CC lib/ftl/utils/ftl_mempool.o 00:06:01.159 SO libspdk_vhost.so.8.0 00:06:01.159 CC lib/iscsi/task.o 00:06:01.159 CC lib/ftl/utils/ftl_bitmap.o 00:06:01.159 CC 
lib/ftl/utils/ftl_property.o 00:06:01.159 SYMLINK libspdk_vhost.so 00:06:01.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:01.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:01.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:01.416 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:01.416 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:01.416 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:01.416 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:01.416 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:01.416 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:01.416 LIB libspdk_iscsi.a 00:06:01.416 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:01.416 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:01.416 CC lib/ftl/base/ftl_base_dev.o 00:06:01.674 CC lib/ftl/base/ftl_base_bdev.o 00:06:01.674 CC lib/ftl/ftl_trace.o 00:06:01.674 SO libspdk_iscsi.so.8.0 00:06:01.931 SYMLINK libspdk_iscsi.so 00:06:01.931 LIB libspdk_ftl.a 00:06:02.189 SO libspdk_ftl.so.9.0 00:06:02.189 LIB libspdk_nvmf.a 00:06:02.446 SO libspdk_nvmf.so.19.0 00:06:02.446 SYMLINK libspdk_ftl.so 00:06:02.704 SYMLINK libspdk_nvmf.so 00:06:02.961 CC module/env_dpdk/env_dpdk_rpc.o 00:06:03.217 CC module/accel/ioat/accel_ioat.o 00:06:03.217 CC module/blob/bdev/blob_bdev.o 00:06:03.217 CC module/accel/error/accel_error.o 00:06:03.217 CC module/accel/dsa/accel_dsa.o 00:06:03.217 CC module/accel/iaa/accel_iaa.o 00:06:03.217 CC module/keyring/file/keyring.o 00:06:03.217 CC module/sock/posix/posix.o 00:06:03.217 CC module/keyring/linux/keyring.o 00:06:03.217 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:03.217 LIB libspdk_env_dpdk_rpc.a 00:06:03.217 SO libspdk_env_dpdk_rpc.so.6.0 00:06:03.475 SYMLINK libspdk_env_dpdk_rpc.so 00:06:03.475 CC module/keyring/linux/keyring_rpc.o 00:06:03.475 CC module/accel/iaa/accel_iaa_rpc.o 00:06:03.475 CC module/keyring/file/keyring_rpc.o 00:06:03.475 CC module/accel/error/accel_error_rpc.o 00:06:03.475 CC module/accel/ioat/accel_ioat_rpc.o 00:06:03.475 LIB libspdk_scheduler_dynamic.a 00:06:03.475 LIB libspdk_blob_bdev.a 00:06:03.475 SO libspdk_scheduler_dynamic.so.4.0 00:06:03.475 CC module/accel/dsa/accel_dsa_rpc.o 00:06:03.475 SO libspdk_blob_bdev.so.11.0 00:06:03.475 LIB libspdk_accel_iaa.a 00:06:03.475 LIB libspdk_keyring_linux.a 00:06:03.475 SO libspdk_keyring_linux.so.1.0 00:06:03.475 SYMLINK libspdk_scheduler_dynamic.so 00:06:03.475 LIB libspdk_keyring_file.a 00:06:03.475 SO libspdk_accel_iaa.so.3.0 00:06:03.475 LIB libspdk_accel_error.a 00:06:03.475 LIB libspdk_accel_ioat.a 00:06:03.752 SYMLINK libspdk_blob_bdev.so 00:06:03.752 SO libspdk_keyring_file.so.1.0 00:06:03.752 SO libspdk_accel_error.so.2.0 00:06:03.752 SO libspdk_accel_ioat.so.6.0 00:06:03.752 SYMLINK libspdk_keyring_linux.so 00:06:03.752 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:03.752 SYMLINK libspdk_accel_iaa.so 00:06:03.752 LIB libspdk_accel_dsa.a 00:06:03.752 SYMLINK libspdk_accel_error.so 00:06:03.752 SYMLINK libspdk_accel_ioat.so 00:06:03.752 SYMLINK libspdk_keyring_file.so 00:06:03.752 SO libspdk_accel_dsa.so.5.0 00:06:03.752 CC module/scheduler/gscheduler/gscheduler.o 00:06:03.752 SYMLINK libspdk_accel_dsa.so 00:06:03.752 LIB libspdk_scheduler_dpdk_governor.a 00:06:03.752 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:04.009 CC module/bdev/delay/vbdev_delay.o 00:06:04.009 CC module/bdev/lvol/vbdev_lvol.o 00:06:04.009 CC module/bdev/malloc/bdev_malloc.o 00:06:04.009 CC module/bdev/gpt/gpt.o 00:06:04.009 LIB libspdk_scheduler_gscheduler.a 00:06:04.009 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:04.009 CC module/bdev/gpt/vbdev_gpt.o 00:06:04.009 CC 
module/bdev/error/vbdev_error.o 00:06:04.009 LIB libspdk_sock_posix.a 00:06:04.009 CC module/blobfs/bdev/blobfs_bdev.o 00:06:04.009 SO libspdk_scheduler_gscheduler.so.4.0 00:06:04.009 CC module/bdev/null/bdev_null.o 00:06:04.009 SO libspdk_sock_posix.so.6.0 00:06:04.009 SYMLINK libspdk_scheduler_gscheduler.so 00:06:04.009 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:04.009 SYMLINK libspdk_sock_posix.so 00:06:04.009 CC module/bdev/error/vbdev_error_rpc.o 00:06:04.267 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:04.267 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:04.267 LIB libspdk_bdev_gpt.a 00:06:04.267 LIB libspdk_blobfs_bdev.a 00:06:04.267 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:04.267 CC module/bdev/null/bdev_null_rpc.o 00:06:04.267 SO libspdk_bdev_gpt.so.6.0 00:06:04.267 LIB libspdk_bdev_error.a 00:06:04.267 SO libspdk_blobfs_bdev.so.6.0 00:06:04.267 SO libspdk_bdev_error.so.6.0 00:06:04.530 SYMLINK libspdk_blobfs_bdev.so 00:06:04.530 SYMLINK libspdk_bdev_gpt.so 00:06:04.530 LIB libspdk_bdev_delay.a 00:06:04.530 SYMLINK libspdk_bdev_error.so 00:06:04.530 SO libspdk_bdev_delay.so.6.0 00:06:04.530 LIB libspdk_bdev_null.a 00:06:04.530 LIB libspdk_bdev_malloc.a 00:06:04.530 SO libspdk_bdev_null.so.6.0 00:06:04.530 SO libspdk_bdev_malloc.so.6.0 00:06:04.530 SYMLINK libspdk_bdev_delay.so 00:06:04.530 LIB libspdk_bdev_lvol.a 00:06:04.787 CC module/bdev/nvme/bdev_nvme.o 00:06:04.787 SYMLINK libspdk_bdev_null.so 00:06:04.787 SO libspdk_bdev_lvol.so.6.0 00:06:04.787 CC module/bdev/passthru/vbdev_passthru.o 00:06:04.787 SYMLINK libspdk_bdev_malloc.so 00:06:04.787 CC module/bdev/split/vbdev_split.o 00:06:04.787 CC module/bdev/raid/bdev_raid.o 00:06:04.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:04.787 CC module/bdev/aio/bdev_aio.o 00:06:04.787 SYMLINK libspdk_bdev_lvol.so 00:06:04.787 CC module/bdev/aio/bdev_aio_rpc.o 00:06:04.787 CC module/bdev/ftl/bdev_ftl.o 00:06:04.787 CC module/bdev/iscsi/bdev_iscsi.o 00:06:05.044 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:05.044 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:05.044 CC module/bdev/split/vbdev_split_rpc.o 00:06:05.044 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:05.302 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:05.302 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:05.302 LIB libspdk_bdev_split.a 00:06:05.302 LIB libspdk_bdev_aio.a 00:06:05.302 LIB libspdk_bdev_passthru.a 00:06:05.302 SO libspdk_bdev_split.so.6.0 00:06:05.302 SO libspdk_bdev_passthru.so.6.0 00:06:05.302 SO libspdk_bdev_aio.so.6.0 00:06:05.302 LIB libspdk_bdev_iscsi.a 00:06:05.302 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:05.302 SYMLINK libspdk_bdev_split.so 00:06:05.302 SO libspdk_bdev_iscsi.so.6.0 00:06:05.302 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:05.302 SYMLINK libspdk_bdev_passthru.so 00:06:05.302 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:05.302 LIB libspdk_bdev_ftl.a 00:06:05.560 SYMLINK libspdk_bdev_aio.so 00:06:05.560 CC module/bdev/nvme/nvme_rpc.o 00:06:05.560 SYMLINK libspdk_bdev_iscsi.so 00:06:05.560 CC module/bdev/nvme/bdev_mdns_client.o 00:06:05.560 CC module/bdev/nvme/vbdev_opal.o 00:06:05.560 SO libspdk_bdev_ftl.so.6.0 00:06:05.560 LIB libspdk_bdev_zone_block.a 00:06:05.560 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:05.560 SO libspdk_bdev_zone_block.so.6.0 00:06:05.560 SYMLINK libspdk_bdev_ftl.so 00:06:05.560 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:05.560 LIB libspdk_bdev_virtio.a 00:06:05.560 SYMLINK libspdk_bdev_zone_block.so 00:06:05.560 CC module/bdev/raid/bdev_raid_rpc.o 00:06:05.560 SO libspdk_bdev_virtio.so.6.0 
00:06:05.817 CC module/bdev/raid/bdev_raid_sb.o 00:06:05.817 SYMLINK libspdk_bdev_virtio.so 00:06:05.817 CC module/bdev/raid/raid0.o 00:06:05.817 CC module/bdev/raid/raid1.o 00:06:05.817 CC module/bdev/raid/concat.o 00:06:06.076 LIB libspdk_bdev_raid.a 00:06:06.076 SO libspdk_bdev_raid.so.6.0 00:06:06.335 SYMLINK libspdk_bdev_raid.so 00:06:06.902 LIB libspdk_bdev_nvme.a 00:06:07.199 SO libspdk_bdev_nvme.so.7.0 00:06:07.199 SYMLINK libspdk_bdev_nvme.so 00:06:07.766 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:07.766 CC module/event/subsystems/keyring/keyring.o 00:06:07.766 CC module/event/subsystems/scheduler/scheduler.o 00:06:07.766 CC module/event/subsystems/iobuf/iobuf.o 00:06:07.766 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:07.766 CC module/event/subsystems/vmd/vmd.o 00:06:07.766 CC module/event/subsystems/sock/sock.o 00:06:07.766 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:08.025 LIB libspdk_event_keyring.a 00:06:08.025 LIB libspdk_event_sock.a 00:06:08.025 LIB libspdk_event_vhost_blk.a 00:06:08.025 LIB libspdk_event_vmd.a 00:06:08.025 SO libspdk_event_keyring.so.1.0 00:06:08.025 LIB libspdk_event_scheduler.a 00:06:08.025 SO libspdk_event_sock.so.5.0 00:06:08.025 LIB libspdk_event_iobuf.a 00:06:08.025 SO libspdk_event_vhost_blk.so.3.0 00:06:08.025 SO libspdk_event_scheduler.so.4.0 00:06:08.025 SO libspdk_event_vmd.so.6.0 00:06:08.025 SYMLINK libspdk_event_keyring.so 00:06:08.025 SYMLINK libspdk_event_sock.so 00:06:08.025 SO libspdk_event_iobuf.so.3.0 00:06:08.025 SYMLINK libspdk_event_vhost_blk.so 00:06:08.025 SYMLINK libspdk_event_scheduler.so 00:06:08.025 SYMLINK libspdk_event_vmd.so 00:06:08.283 SYMLINK libspdk_event_iobuf.so 00:06:08.541 CC module/event/subsystems/accel/accel.o 00:06:08.541 LIB libspdk_event_accel.a 00:06:08.799 SO libspdk_event_accel.so.6.0 00:06:08.799 SYMLINK libspdk_event_accel.so 00:06:09.057 CC module/event/subsystems/bdev/bdev.o 00:06:09.313 LIB libspdk_event_bdev.a 00:06:09.313 SO libspdk_event_bdev.so.6.0 00:06:09.313 SYMLINK libspdk_event_bdev.so 00:06:09.570 CC module/event/subsystems/ublk/ublk.o 00:06:09.570 CC module/event/subsystems/scsi/scsi.o 00:06:09.570 CC module/event/subsystems/nbd/nbd.o 00:06:09.570 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:09.570 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:09.827 LIB libspdk_event_ublk.a 00:06:09.827 LIB libspdk_event_nbd.a 00:06:09.827 SO libspdk_event_ublk.so.3.0 00:06:09.827 LIB libspdk_event_scsi.a 00:06:09.827 SO libspdk_event_nbd.so.6.0 00:06:09.827 SO libspdk_event_scsi.so.6.0 00:06:09.827 SYMLINK libspdk_event_ublk.so 00:06:10.126 SYMLINK libspdk_event_nbd.so 00:06:10.126 SYMLINK libspdk_event_scsi.so 00:06:10.126 LIB libspdk_event_nvmf.a 00:06:10.126 SO libspdk_event_nvmf.so.6.0 00:06:10.126 SYMLINK libspdk_event_nvmf.so 00:06:10.384 CC module/event/subsystems/iscsi/iscsi.o 00:06:10.384 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:10.384 LIB libspdk_event_iscsi.a 00:06:10.384 LIB libspdk_event_vhost_scsi.a 00:06:10.641 SO libspdk_event_iscsi.so.6.0 00:06:10.641 SO libspdk_event_vhost_scsi.so.3.0 00:06:10.641 SYMLINK libspdk_event_vhost_scsi.so 00:06:10.641 SYMLINK libspdk_event_iscsi.so 00:06:10.641 SO libspdk.so.6.0 00:06:10.899 SYMLINK libspdk.so 00:06:11.158 CXX app/trace/trace.o 00:06:11.158 TEST_HEADER include/spdk/accel.h 00:06:11.158 TEST_HEADER include/spdk/accel_module.h 00:06:11.158 CC app/trace_record/trace_record.o 00:06:11.158 TEST_HEADER include/spdk/assert.h 00:06:11.158 TEST_HEADER include/spdk/barrier.h 00:06:11.158 TEST_HEADER 
include/spdk/base64.h 00:06:11.158 TEST_HEADER include/spdk/bdev.h 00:06:11.158 TEST_HEADER include/spdk/bdev_module.h 00:06:11.158 TEST_HEADER include/spdk/bdev_zone.h 00:06:11.158 TEST_HEADER include/spdk/bit_array.h 00:06:11.158 TEST_HEADER include/spdk/bit_pool.h 00:06:11.158 TEST_HEADER include/spdk/blob_bdev.h 00:06:11.158 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:11.158 TEST_HEADER include/spdk/blobfs.h 00:06:11.158 TEST_HEADER include/spdk/blob.h 00:06:11.158 TEST_HEADER include/spdk/conf.h 00:06:11.158 TEST_HEADER include/spdk/config.h 00:06:11.158 TEST_HEADER include/spdk/cpuset.h 00:06:11.158 TEST_HEADER include/spdk/crc16.h 00:06:11.158 TEST_HEADER include/spdk/crc32.h 00:06:11.158 TEST_HEADER include/spdk/crc64.h 00:06:11.158 TEST_HEADER include/spdk/dif.h 00:06:11.158 TEST_HEADER include/spdk/dma.h 00:06:11.158 TEST_HEADER include/spdk/endian.h 00:06:11.158 TEST_HEADER include/spdk/env_dpdk.h 00:06:11.158 TEST_HEADER include/spdk/env.h 00:06:11.158 TEST_HEADER include/spdk/event.h 00:06:11.158 TEST_HEADER include/spdk/fd_group.h 00:06:11.158 TEST_HEADER include/spdk/fd.h 00:06:11.158 TEST_HEADER include/spdk/file.h 00:06:11.158 TEST_HEADER include/spdk/ftl.h 00:06:11.158 TEST_HEADER include/spdk/gpt_spec.h 00:06:11.158 TEST_HEADER include/spdk/hexlify.h 00:06:11.158 CC app/nvmf_tgt/nvmf_main.o 00:06:11.158 TEST_HEADER include/spdk/histogram_data.h 00:06:11.158 TEST_HEADER include/spdk/idxd.h 00:06:11.158 TEST_HEADER include/spdk/idxd_spec.h 00:06:11.158 TEST_HEADER include/spdk/init.h 00:06:11.158 TEST_HEADER include/spdk/ioat.h 00:06:11.158 CC examples/util/zipf/zipf.o 00:06:11.158 TEST_HEADER include/spdk/ioat_spec.h 00:06:11.158 TEST_HEADER include/spdk/iscsi_spec.h 00:06:11.158 TEST_HEADER include/spdk/json.h 00:06:11.158 CC examples/ioat/perf/perf.o 00:06:11.158 TEST_HEADER include/spdk/jsonrpc.h 00:06:11.158 TEST_HEADER include/spdk/keyring.h 00:06:11.158 TEST_HEADER include/spdk/keyring_module.h 00:06:11.158 TEST_HEADER include/spdk/likely.h 00:06:11.158 TEST_HEADER include/spdk/log.h 00:06:11.158 TEST_HEADER include/spdk/lvol.h 00:06:11.158 CC test/thread/poller_perf/poller_perf.o 00:06:11.158 TEST_HEADER include/spdk/memory.h 00:06:11.158 TEST_HEADER include/spdk/mmio.h 00:06:11.158 TEST_HEADER include/spdk/nbd.h 00:06:11.158 TEST_HEADER include/spdk/notify.h 00:06:11.158 TEST_HEADER include/spdk/nvme.h 00:06:11.158 TEST_HEADER include/spdk/nvme_intel.h 00:06:11.158 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:11.158 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:11.158 TEST_HEADER include/spdk/nvme_spec.h 00:06:11.158 TEST_HEADER include/spdk/nvme_zns.h 00:06:11.158 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:11.158 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:11.158 CC test/dma/test_dma/test_dma.o 00:06:11.158 TEST_HEADER include/spdk/nvmf.h 00:06:11.158 TEST_HEADER include/spdk/nvmf_spec.h 00:06:11.416 TEST_HEADER include/spdk/nvmf_transport.h 00:06:11.416 TEST_HEADER include/spdk/opal.h 00:06:11.416 TEST_HEADER include/spdk/opal_spec.h 00:06:11.416 CC test/app/bdev_svc/bdev_svc.o 00:06:11.416 TEST_HEADER include/spdk/pci_ids.h 00:06:11.416 TEST_HEADER include/spdk/pipe.h 00:06:11.416 TEST_HEADER include/spdk/queue.h 00:06:11.416 TEST_HEADER include/spdk/reduce.h 00:06:11.416 TEST_HEADER include/spdk/rpc.h 00:06:11.416 TEST_HEADER include/spdk/scheduler.h 00:06:11.416 TEST_HEADER include/spdk/scsi.h 00:06:11.416 TEST_HEADER include/spdk/scsi_spec.h 00:06:11.416 TEST_HEADER include/spdk/sock.h 00:06:11.416 TEST_HEADER include/spdk/stdinc.h 00:06:11.416 
TEST_HEADER include/spdk/string.h 00:06:11.416 TEST_HEADER include/spdk/thread.h 00:06:11.416 TEST_HEADER include/spdk/trace.h 00:06:11.416 TEST_HEADER include/spdk/trace_parser.h 00:06:11.416 TEST_HEADER include/spdk/tree.h 00:06:11.416 TEST_HEADER include/spdk/ublk.h 00:06:11.416 TEST_HEADER include/spdk/util.h 00:06:11.416 TEST_HEADER include/spdk/uuid.h 00:06:11.416 TEST_HEADER include/spdk/version.h 00:06:11.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:11.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:11.416 TEST_HEADER include/spdk/vhost.h 00:06:11.416 TEST_HEADER include/spdk/vmd.h 00:06:11.416 TEST_HEADER include/spdk/xor.h 00:06:11.416 TEST_HEADER include/spdk/zipf.h 00:06:11.416 CXX test/cpp_headers/accel.o 00:06:11.416 LINK poller_perf 00:06:11.416 LINK zipf 00:06:11.416 LINK ioat_perf 00:06:11.416 LINK spdk_trace_record 00:06:11.416 LINK nvmf_tgt 00:06:11.675 LINK bdev_svc 00:06:11.675 CXX test/cpp_headers/accel_module.o 00:06:11.675 CXX test/cpp_headers/assert.o 00:06:11.675 CXX test/cpp_headers/barrier.o 00:06:11.675 LINK test_dma 00:06:11.675 CXX test/cpp_headers/base64.o 00:06:11.933 LINK spdk_trace 00:06:11.933 CXX test/cpp_headers/bdev.o 00:06:11.933 CC examples/ioat/verify/verify.o 00:06:11.933 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:11.933 CC app/iscsi_tgt/iscsi_tgt.o 00:06:12.191 CC examples/sock/hello_world/hello_sock.o 00:06:12.191 CXX test/cpp_headers/bdev_module.o 00:06:12.191 CC examples/thread/thread/thread_ex.o 00:06:12.191 LINK verify 00:06:12.191 LINK interrupt_tgt 00:06:12.450 CC examples/vmd/lsvmd/lsvmd.o 00:06:12.450 LINK iscsi_tgt 00:06:12.450 CXX test/cpp_headers/bdev_zone.o 00:06:12.450 LINK hello_sock 00:06:12.450 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:12.450 LINK thread 00:06:12.709 CC examples/vmd/led/led.o 00:06:12.709 CXX test/cpp_headers/bit_array.o 00:06:12.709 CXX test/cpp_headers/bit_pool.o 00:06:12.709 LINK lsvmd 00:06:12.709 CXX test/cpp_headers/blob_bdev.o 00:06:12.999 CXX test/cpp_headers/blobfs_bdev.o 00:06:12.999 LINK led 00:06:12.999 CC test/app/histogram_perf/histogram_perf.o 00:06:12.999 CXX test/cpp_headers/blobfs.o 00:06:12.999 CC test/app/jsoncat/jsoncat.o 00:06:12.999 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:12.999 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:13.000 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:13.257 LINK histogram_perf 00:06:13.257 LINK jsoncat 00:06:13.257 LINK nvme_fuzz 00:06:13.257 CXX test/cpp_headers/blob.o 00:06:13.515 CC app/spdk_tgt/spdk_tgt.o 00:06:13.515 CC app/spdk_lspci/spdk_lspci.o 00:06:13.515 CC examples/idxd/perf/perf.o 00:06:13.515 CXX test/cpp_headers/conf.o 00:06:13.515 CC test/app/stub/stub.o 00:06:13.515 LINK vhost_fuzz 00:06:13.515 LINK spdk_lspci 00:06:13.773 LINK spdk_tgt 00:06:13.773 CXX test/cpp_headers/config.o 00:06:13.773 CXX test/cpp_headers/cpuset.o 00:06:13.773 LINK stub 00:06:13.773 CC examples/accel/perf/accel_perf.o 00:06:13.773 LINK idxd_perf 00:06:13.773 CC app/spdk_nvme_perf/perf.o 00:06:14.031 CXX test/cpp_headers/crc16.o 00:06:14.031 CC test/env/mem_callbacks/mem_callbacks.o 00:06:14.031 CXX test/cpp_headers/crc32.o 00:06:14.031 CXX test/cpp_headers/crc64.o 00:06:14.031 CC test/event/event_perf/event_perf.o 00:06:14.290 CXX test/cpp_headers/dif.o 00:06:14.290 LINK event_perf 00:06:14.290 CC examples/blob/hello_world/hello_blob.o 00:06:14.290 LINK accel_perf 00:06:14.290 CC test/event/reactor/reactor.o 00:06:14.548 CC examples/nvme/hello_world/hello_world.o 00:06:14.548 CXX test/cpp_headers/dma.o 00:06:14.548 LINK reactor 00:06:14.808 
LINK mem_callbacks 00:06:14.808 LINK hello_blob 00:06:14.808 CC examples/nvme/reconnect/reconnect.o 00:06:14.808 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:14.808 LINK hello_world 00:06:14.808 CXX test/cpp_headers/endian.o 00:06:14.808 LINK spdk_nvme_perf 00:06:15.067 CC test/event/reactor_perf/reactor_perf.o 00:06:15.067 CC test/env/vtophys/vtophys.o 00:06:15.067 CC examples/blob/cli/blobcli.o 00:06:15.325 LINK reconnect 00:06:15.325 CXX test/cpp_headers/env_dpdk.o 00:06:15.325 LINK reactor_perf 00:06:15.325 LINK vtophys 00:06:15.325 LINK nvme_manage 00:06:15.325 CC examples/bdev/hello_world/hello_bdev.o 00:06:15.325 CXX test/cpp_headers/env.o 00:06:15.652 CC app/spdk_nvme_identify/identify.o 00:06:15.652 LINK iscsi_fuzz 00:06:15.652 CC examples/nvme/arbitration/arbitration.o 00:06:15.652 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:15.652 CXX test/cpp_headers/event.o 00:06:15.652 CC test/event/app_repeat/app_repeat.o 00:06:15.652 LINK hello_bdev 00:06:15.652 LINK blobcli 00:06:15.910 CC examples/nvme/hotplug/hotplug.o 00:06:15.910 LINK env_dpdk_post_init 00:06:15.911 CXX test/cpp_headers/fd_group.o 00:06:16.168 LINK app_repeat 00:06:16.168 LINK arbitration 00:06:16.168 CXX test/cpp_headers/fd.o 00:06:16.168 CXX test/cpp_headers/file.o 00:06:16.425 CC examples/bdev/bdevperf/bdevperf.o 00:06:16.425 LINK hotplug 00:06:16.425 CXX test/cpp_headers/ftl.o 00:06:16.683 CC test/env/memory/memory_ut.o 00:06:16.683 CC app/spdk_nvme_discover/discovery_aer.o 00:06:16.683 CC test/event/scheduler/scheduler.o 00:06:16.683 CC app/spdk_top/spdk_top.o 00:06:16.683 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:16.683 CXX test/cpp_headers/gpt_spec.o 00:06:16.683 LINK spdk_nvme_identify 00:06:16.942 LINK spdk_nvme_discover 00:06:16.942 LINK cmb_copy 00:06:16.942 CXX test/cpp_headers/hexlify.o 00:06:16.942 LINK scheduler 00:06:16.942 CC examples/nvme/abort/abort.o 00:06:16.942 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:17.201 CXX test/cpp_headers/histogram_data.o 00:06:17.201 CC test/env/pci/pci_ut.o 00:06:17.201 LINK bdevperf 00:06:17.201 LINK pmr_persistence 00:06:17.459 CXX test/cpp_headers/idxd.o 00:06:17.459 CC test/nvme/aer/aer.o 00:06:17.459 CC app/vhost/vhost.o 00:06:17.459 LINK abort 00:06:17.717 LINK spdk_top 00:06:17.717 CXX test/cpp_headers/idxd_spec.o 00:06:17.717 LINK vhost 00:06:17.717 LINK pci_ut 00:06:17.717 CC app/spdk_dd/spdk_dd.o 00:06:17.717 LINK aer 00:06:17.975 LINK memory_ut 00:06:17.975 CXX test/cpp_headers/init.o 00:06:17.975 CC app/fio/nvme/fio_plugin.o 00:06:17.975 CC test/rpc_client/rpc_client_test.o 00:06:17.975 CXX test/cpp_headers/ioat.o 00:06:17.975 CC app/fio/bdev/fio_plugin.o 00:06:17.975 LINK spdk_dd 00:06:17.975 CC test/accel/dif/dif.o 00:06:18.234 LINK rpc_client_test 00:06:18.234 CXX test/cpp_headers/ioat_spec.o 00:06:18.234 CC test/nvme/reset/reset.o 00:06:18.234 CXX test/cpp_headers/iscsi_spec.o 00:06:18.234 CXX test/cpp_headers/json.o 00:06:18.234 CXX test/cpp_headers/jsonrpc.o 00:06:18.493 CXX test/cpp_headers/keyring.o 00:06:18.493 LINK reset 00:06:18.493 CC examples/nvmf/nvmf/nvmf.o 00:06:18.493 LINK spdk_nvme 00:06:18.493 LINK dif 00:06:18.493 CC test/nvme/sgl/sgl.o 00:06:18.493 CXX test/cpp_headers/keyring_module.o 00:06:18.758 CXX test/cpp_headers/likely.o 00:06:18.758 LINK spdk_bdev 00:06:18.758 CXX test/cpp_headers/log.o 00:06:18.758 CC test/blobfs/mkfs/mkfs.o 00:06:18.758 CXX test/cpp_headers/lvol.o 00:06:18.758 CXX test/cpp_headers/memory.o 00:06:18.758 CXX test/cpp_headers/mmio.o 00:06:18.758 CXX test/cpp_headers/nbd.o 
00:06:18.758 LINK nvmf 00:06:19.020 CC test/lvol/esnap/esnap.o 00:06:19.020 LINK sgl 00:06:19.020 CXX test/cpp_headers/notify.o 00:06:19.020 LINK mkfs 00:06:19.020 CXX test/cpp_headers/nvme.o 00:06:19.020 CXX test/cpp_headers/nvme_intel.o 00:06:19.020 CC test/nvme/e2edp/nvme_dp.o 00:06:19.277 CC test/bdev/bdevio/bdevio.o 00:06:19.277 CXX test/cpp_headers/nvme_ocssd.o 00:06:19.277 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:19.277 CXX test/cpp_headers/nvme_spec.o 00:06:19.277 CC test/nvme/overhead/overhead.o 00:06:19.277 CC test/nvme/err_injection/err_injection.o 00:06:19.277 CC test/nvme/startup/startup.o 00:06:19.535 CXX test/cpp_headers/nvme_zns.o 00:06:19.535 LINK nvme_dp 00:06:19.535 LINK err_injection 00:06:19.535 CC test/nvme/reserve/reserve.o 00:06:19.535 CC test/nvme/simple_copy/simple_copy.o 00:06:19.793 CXX test/cpp_headers/nvmf_cmd.o 00:06:19.793 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:19.793 LINK startup 00:06:19.793 LINK overhead 00:06:19.793 CXX test/cpp_headers/nvmf.o 00:06:19.793 LINK bdevio 00:06:19.793 CXX test/cpp_headers/nvmf_spec.o 00:06:20.051 LINK simple_copy 00:06:20.051 CXX test/cpp_headers/nvmf_transport.o 00:06:20.051 LINK reserve 00:06:20.051 CXX test/cpp_headers/opal.o 00:06:20.051 CC test/nvme/connect_stress/connect_stress.o 00:06:20.051 CC test/nvme/boot_partition/boot_partition.o 00:06:20.051 CC test/nvme/compliance/nvme_compliance.o 00:06:20.309 CXX test/cpp_headers/opal_spec.o 00:06:20.309 CC test/nvme/fused_ordering/fused_ordering.o 00:06:20.309 LINK connect_stress 00:06:20.309 LINK boot_partition 00:06:20.309 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:20.309 CXX test/cpp_headers/pci_ids.o 00:06:20.309 CC test/nvme/fdp/fdp.o 00:06:20.309 CC test/nvme/cuse/cuse.o 00:06:20.309 CXX test/cpp_headers/pipe.o 00:06:20.565 LINK nvme_compliance 00:06:20.565 CXX test/cpp_headers/queue.o 00:06:20.565 LINK fused_ordering 00:06:20.565 CXX test/cpp_headers/reduce.o 00:06:20.565 CXX test/cpp_headers/rpc.o 00:06:20.565 CXX test/cpp_headers/scheduler.o 00:06:20.565 CXX test/cpp_headers/scsi.o 00:06:20.822 LINK doorbell_aers 00:06:20.822 CXX test/cpp_headers/scsi_spec.o 00:06:20.822 CXX test/cpp_headers/sock.o 00:06:20.822 CXX test/cpp_headers/stdinc.o 00:06:20.822 LINK fdp 00:06:20.822 CXX test/cpp_headers/string.o 00:06:20.822 CXX test/cpp_headers/thread.o 00:06:20.822 CXX test/cpp_headers/trace.o 00:06:20.822 CXX test/cpp_headers/trace_parser.o 00:06:21.079 CXX test/cpp_headers/tree.o 00:06:21.079 CXX test/cpp_headers/ublk.o 00:06:21.079 CXX test/cpp_headers/util.o 00:06:21.079 CXX test/cpp_headers/uuid.o 00:06:21.079 CXX test/cpp_headers/version.o 00:06:21.079 CXX test/cpp_headers/vfio_user_pci.o 00:06:21.079 CXX test/cpp_headers/vfio_user_spec.o 00:06:21.079 CXX test/cpp_headers/vhost.o 00:06:21.079 CXX test/cpp_headers/vmd.o 00:06:21.079 CXX test/cpp_headers/xor.o 00:06:21.337 CXX test/cpp_headers/zipf.o 00:06:21.900 LINK cuse 00:06:25.176 LINK esnap 00:06:25.434 00:06:25.434 real 1m19.715s 00:06:25.434 user 7m41.708s 00:06:25.434 sys 2m12.744s 00:06:25.434 18:31:59 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:25.434 18:31:59 make -- common/autotest_common.sh@10 -- $ set +x 00:06:25.434 ************************************ 00:06:25.434 END TEST make 00:06:25.434 ************************************ 00:06:25.434 18:31:59 -- common/autotest_common.sh@1142 -- $ return 0 00:06:25.434 18:31:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:25.434 18:31:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:25.434 18:31:59 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:25.434 18:31:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.434 18:31:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:25.434 18:31:59 -- pm/common@44 -- $ pid=5199 00:06:25.434 18:31:59 -- pm/common@50 -- $ kill -TERM 5199 00:06:25.434 18:31:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.434 18:31:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:25.434 18:31:59 -- pm/common@44 -- $ pid=5200 00:06:25.434 18:31:59 -- pm/common@50 -- $ kill -TERM 5200 00:06:25.434 18:31:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.434 18:31:59 -- nvmf/common.sh@7 -- # uname -s 00:06:25.434 18:31:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.434 18:31:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.434 18:31:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.434 18:31:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.434 18:31:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.434 18:31:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.434 18:31:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.434 18:31:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.434 18:31:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.434 18:31:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.434 18:31:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:06:25.434 18:31:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:06:25.434 18:31:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.434 18:31:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.434 18:31:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:25.434 18:31:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.434 18:31:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.434 18:31:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.434 18:31:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.434 18:31:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.434 18:31:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.434 18:31:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.434 18:31:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.434 18:31:59 -- paths/export.sh@5 -- # export PATH 00:06:25.434 18:31:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.434 18:31:59 -- nvmf/common.sh@47 -- # : 0 00:06:25.434 18:31:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.434 18:31:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.434 18:31:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.434 18:31:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.434 18:31:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.434 18:31:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.434 18:31:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.434 18:31:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.434 18:31:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:25.434 18:31:59 -- spdk/autotest.sh@32 -- # uname -s 00:06:25.434 18:31:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:25.434 18:31:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:25.434 18:31:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:25.434 18:31:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:25.434 18:31:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:25.434 18:31:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:25.434 18:31:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:25.434 18:31:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:25.434 18:31:59 -- spdk/autotest.sh@48 -- # udevadm_pid=54661 00:06:25.434 18:31:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:25.434 18:31:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:25.434 18:31:59 -- pm/common@17 -- # local monitor 00:06:25.434 18:31:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.434 18:31:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:25.434 18:31:59 -- pm/common@21 -- # date +%s 00:06:25.434 18:31:59 -- pm/common@25 -- # sleep 1 00:06:25.434 18:31:59 -- pm/common@21 -- # date +%s 00:06:25.434 18:31:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721068319 00:06:25.692 18:31:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721068319 00:06:25.692 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721068319_collect-vmstat.pm.log 00:06:25.692 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721068319_collect-cpu-load.pm.log 00:06:26.625 18:32:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:26.625 18:32:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:26.625 18:32:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.625 18:32:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.625 18:32:00 -- spdk/autotest.sh@59 -- # create_test_list 00:06:26.625 18:32:00 -- common/autotest_common.sh@746 -- # xtrace_disable 00:06:26.625 18:32:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.625 18:32:00 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:26.625 18:32:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:26.625 18:32:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:26.625 18:32:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:26.625 18:32:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:26.625 18:32:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:26.625 18:32:00 -- common/autotest_common.sh@1455 -- # uname 00:06:26.625 18:32:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:26.625 18:32:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:26.625 18:32:00 -- common/autotest_common.sh@1475 -- # uname 00:06:26.625 18:32:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:26.625 18:32:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:26.625 18:32:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:26.625 18:32:00 -- spdk/autotest.sh@72 -- # hash lcov 00:06:26.625 18:32:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:26.625 18:32:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:26.625 --rc lcov_branch_coverage=1 00:06:26.625 --rc lcov_function_coverage=1 00:06:26.625 --rc genhtml_branch_coverage=1 00:06:26.625 --rc genhtml_function_coverage=1 00:06:26.625 --rc genhtml_legend=1 00:06:26.625 --rc geninfo_all_blocks=1 00:06:26.625 ' 00:06:26.625 18:32:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:26.625 --rc lcov_branch_coverage=1 00:06:26.625 --rc lcov_function_coverage=1 00:06:26.625 --rc genhtml_branch_coverage=1 00:06:26.625 --rc genhtml_function_coverage=1 00:06:26.625 --rc genhtml_legend=1 00:06:26.625 --rc geninfo_all_blocks=1 00:06:26.625 ' 00:06:26.625 18:32:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:26.625 --rc lcov_branch_coverage=1 00:06:26.625 --rc lcov_function_coverage=1 00:06:26.625 --rc genhtml_branch_coverage=1 00:06:26.625 --rc genhtml_function_coverage=1 00:06:26.625 --rc genhtml_legend=1 00:06:26.625 --rc geninfo_all_blocks=1 00:06:26.625 --no-external' 00:06:26.625 18:32:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:26.625 --rc lcov_branch_coverage=1 00:06:26.625 --rc lcov_function_coverage=1 00:06:26.625 --rc genhtml_branch_coverage=1 00:06:26.625 --rc genhtml_function_coverage=1 00:06:26.625 --rc genhtml_legend=1 00:06:26.625 --rc geninfo_all_blocks=1 00:06:26.625 --no-external' 00:06:26.625 18:32:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:26.625 lcov: LCOV version 1.14 00:06:26.625 18:32:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:44.795 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:44.795 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:57.003 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:57.003 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:57.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:57.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:57.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:01.212 18:32:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:07:01.212 18:32:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.212 18:32:34 -- common/autotest_common.sh@10 -- # set +x 00:07:01.212 18:32:34 -- spdk/autotest.sh@91 -- # rm -f 00:07:01.212 18:32:34 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:01.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:01.212 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:01.469 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:01.469 18:32:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:07:01.469 18:32:35 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:01.469 18:32:35 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:01.469 18:32:35 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:01.469 18:32:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:01.469 18:32:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:01.469 18:32:35 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:01.469 18:32:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:01.469 18:32:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:01.469 18:32:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:01.469 18:32:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:01.469 18:32:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:07:01.469 18:32:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:07:01.469 18:32:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:01.469 18:32:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:01.469 18:32:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:07:01.469 18:32:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:07:01.469 18:32:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:01.469 18:32:35 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:01.469 18:32:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:07:01.469 18:32:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:01.469 18:32:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:01.469 18:32:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:07:01.469 18:32:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:07:01.469 18:32:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:01.469 No valid GPT data, bailing 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # pt= 00:07:01.469 18:32:35 -- scripts/common.sh@392 -- # return 1 00:07:01.469 18:32:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:01.469 1+0 records in 00:07:01.469 1+0 records out 00:07:01.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427823 s, 245 MB/s 00:07:01.469 18:32:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:01.469 18:32:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:01.469 18:32:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:07:01.469 18:32:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:07:01.469 18:32:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:01.469 No valid GPT data, bailing 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # pt= 00:07:01.469 18:32:35 -- scripts/common.sh@392 -- # return 1 00:07:01.469 18:32:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:01.469 1+0 records in 00:07:01.469 1+0 records out 00:07:01.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445654 s, 235 MB/s 00:07:01.469 18:32:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:01.469 18:32:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:01.469 18:32:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:07:01.469 18:32:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:07:01.469 18:32:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:01.469 No valid GPT data, bailing 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:01.469 18:32:35 -- scripts/common.sh@391 -- # pt= 00:07:01.469 18:32:35 -- scripts/common.sh@392 -- # return 1 00:07:01.469 18:32:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:01.727 1+0 records in 00:07:01.727 1+0 records out 00:07:01.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574823 s, 182 MB/s 00:07:01.727 18:32:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:01.727 18:32:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:01.727 18:32:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:07:01.727 18:32:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:07:01.727 18:32:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:01.727 No valid GPT data, bailing 00:07:01.727 18:32:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:01.727 18:32:36 -- scripts/common.sh@391 -- # pt= 00:07:01.727 18:32:36 -- scripts/common.sh@392 -- # return 1 00:07:01.727 18:32:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:07:01.727 1+0 records in 00:07:01.727 1+0 records out 00:07:01.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573755 s, 183 MB/s 00:07:01.727 18:32:36 -- spdk/autotest.sh@118 -- # sync 00:07:01.727 18:32:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:01.727 18:32:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:01.727 18:32:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:03.716 18:32:38 -- spdk/autotest.sh@124 -- # uname -s 00:07:03.716 18:32:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:07:03.716 18:32:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:03.716 18:32:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.716 18:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.716 18:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.716 ************************************ 00:07:03.716 START TEST setup.sh 00:07:03.716 ************************************ 00:07:03.716 18:32:38 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:03.973 * Looking for test storage... 00:07:03.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:03.973 18:32:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:07:03.974 18:32:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:07:03.974 18:32:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:03.974 18:32:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.974 18:32:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.974 18:32:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:03.974 ************************************ 00:07:03.974 START TEST acl 00:07:03.974 ************************************ 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:03.974 * Looking for test storage... 
00:07:03.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:03.974 18:32:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:07:03.974 18:32:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:07:03.974 18:32:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:03.974 18:32:38 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:04.908 18:32:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:07:04.908 18:32:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:07:04.908 18:32:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:04.908 18:32:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:07:04.908 18:32:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:07:04.908 18:32:39 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:05.474 18:32:39 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.474 Hugepages 00:07:05.474 node hugesize free / total 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.474 00:07:05.474 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:05.474 18:32:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:07:05.769 18:32:40 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:07:05.769 18:32:40 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.769 18:32:40 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.769 18:32:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:05.769 ************************************ 00:07:05.769 START TEST denied 00:07:05.769 ************************************ 00:07:05.769 18:32:40 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:07:05.769 18:32:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:07:05.769 18:32:40 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:07:05.769 18:32:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:07:05.769 18:32:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:07:05.769 18:32:40 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:06.702 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:06.702 18:32:41 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:07.268 00:07:07.268 real 0m1.558s 00:07:07.268 user 0m0.572s 00:07:07.268 sys 0m0.953s 00:07:07.268 18:32:41 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.268 18:32:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:07.268 ************************************ 00:07:07.268 END TEST denied 00:07:07.268 ************************************ 00:07:07.268 18:32:41 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:07.268 18:32:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:07.268 18:32:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.268 18:32:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.268 18:32:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:07.268 ************************************ 00:07:07.268 START TEST allowed 00:07:07.268 ************************************ 00:07:07.268 18:32:41 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:07:07.268 18:32:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:07:07.268 18:32:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:07:07.268 18:32:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:07.268 18:32:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:07.268 18:32:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:08.204 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:08.204 18:32:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:09.137 00:07:09.137 real 0m1.755s 00:07:09.137 user 0m0.714s 00:07:09.137 sys 0m1.060s 00:07:09.137 18:32:43 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:09.137 ************************************ 00:07:09.137 END TEST allowed 00:07:09.137 ************************************ 00:07:09.137 18:32:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:09.137 18:32:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:09.137 00:07:09.137 real 0m5.284s 00:07:09.137 user 0m2.152s 00:07:09.137 sys 0m3.154s 00:07:09.137 18:32:43 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.137 18:32:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:09.137 ************************************ 00:07:09.137 END TEST acl 00:07:09.137 ************************************ 00:07:09.137 18:32:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:09.137 18:32:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:09.137 18:32:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.137 18:32:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.137 18:32:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:09.137 ************************************ 00:07:09.137 START TEST hugepages 00:07:09.137 ************************************ 00:07:09.137 18:32:43 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:09.395 * Looking for test storage... 00:07:09.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:09.395 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:09.395 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:09.395 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:09.395 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:09.395 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5868400 kB' 'MemAvailable: 7378696 kB' 'Buffers: 2436 kB' 'Cached: 1721928 kB' 'SwapCached: 0 kB' 'Active: 477028 kB' 'Inactive: 1351708 kB' 'Active(anon): 114860 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106032 kB' 'Mapped: 48664 kB' 'Shmem: 10488 kB' 'KReclaimable: 67168 kB' 'Slab: 145252 kB' 'SReclaimable: 67168 kB' 'SUnreclaim: 78084 kB' 'KernelStack: 6412 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 332884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.396 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:09.397 18:32:43 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:09.397 18:32:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:09.397 18:32:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.397 18:32:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.397 18:32:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:09.397 ************************************ 00:07:09.397 START TEST default_setup 00:07:09.397 ************************************ 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:09.397 18:32:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.337 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.337 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7979188 kB' 'MemAvailable: 9489264 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493476 kB' 'Inactive: 1351716 kB' 'Active(anon): 131308 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122468 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144664 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77952 kB' 'KernelStack: 6368 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.337 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
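The wall of "[[ <key> == \H\u\g\e... ]] / continue" entries above and below is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time until it reaches the requested key (AnonHugePages in this pass) and then echoing that key's value column. A minimal sketch of the same parsing pattern, limited to plain /proc/meminfo (the real helper also accepts a node argument and strips the "Node <n> " prefix from per-node meminfo files; the function name below is illustrative, not the SPDK helper itself):

  get_meminfo_sketch() {
      # Print the value column for one /proc/meminfo key, e.g. "2048" for Hugepagesize.
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  # On this runner the trace shows get_meminfo Hugepagesize returning 2048
  # and get_meminfo AnonHugePages returning 0.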
00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.338 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9488768 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 492884 kB' 'Inactive: 1351720 kB' 'Active(anon): 130716 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121840 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144664 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77952 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
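By this point verify_nr_hugepages has read AnonHugePages (anon=0) and is repeating the same scan for HugePages_Surp and then HugePages_Rsvd, before checking the counters against the pool configured earlier: 2048 kB pages with nr_hugepages=1024, i.e. 1024 * 2048 kB = 2097152 kB, which matches the "Hugetlb: 2097152 kB" field in the meminfo dumps above. A rough sketch of reading the same knobs by hand (paths taken from the trace plus the standard per-node sysfs layout; not the SPDK scripts themselves):

  # Page size and pool counters for the default 2 MiB hugepage size.
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)        # 2048 here
  global_nr=$(cat /proc/sys/vm/nr_hugepages)                                # 1024 here
  cat /sys/kernel/mm/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages  # same pool, per size
  for node in /sys/devices/system/node/node[0-9]*; do
      # Per-node view of the pool (this run is a single-node VM, so node0 == global).
      cat "$node/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"
  done
  echo "HugeTLB reserved: $(( global_nr * hugepagesize_kb )) kB"            # 2097152 kB here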
00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.339 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:10.340 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.341 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9488768 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493120 kB' 'Inactive: 1351720 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144664 kB' 'SReclaimable: 66712 kB' 
'SUnreclaim: 77952 kB' 'KernelStack: 6336 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:10.341 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.341 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.341 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.602 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.603 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.603 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:10.604 nr_hugepages=1024 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:10.604 resv_hugepages=0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:10.604 surplus_hugepages=0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:10.604 anon_hugepages=0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7978436 kB' 'MemAvailable: 9488516 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493076 kB' 'Inactive: 1351720 kB' 'Active(anon): 130908 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122044 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144660 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77948 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.604 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
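(Annotation, not part of the captured console output.) The xtrace around this point is setup/common.sh's get_meminfo walking the captured /proc/meminfo snapshot one key at a time: set IFS=': ', read -r var val _, skip every key that is not the requested one, then echo its value. A minimal sketch of that lookup pattern, reconstructed from the trace above; the real common.sh may differ in details, for example in how it captures the meminfo contents before the loop:

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
# or from the per-node meminfo file when NODE is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node queries read the node-specific meminfo file instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# e.g.: surp=$(get_meminfo HugePages_Surp); resv=$(get_meminfo HugePages_Rsvd)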
00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
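(Annotation, not part of the captured console output.) Once the loop below reaches HugePages_Total it echoes 1024 and returns, and hugepages.sh then re-checks the pool accounting, visible in the trace as (( 1024 == nr_hugepages + surp + resv )). In plain terms, with the values from this run; a hedged illustration using the sketch above, not the script itself:

nr_hugepages=1024                      # what default_setup asked the kernel for
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run

# The pool is consistent when the kernel-reported total equals the
# requested pages plus any surplus and reserved pages.
if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
fi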
00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.605 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7978436 kB' 'MemUsed: 4263532 kB' 'SwapCached: 0 kB' 'Active: 493080 kB' 'Inactive: 1351720 kB' 'Active(anon): 130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1724356 kB' 'Mapped: 48688 kB' 'AnonPages: 122048 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66712 kB' 'Slab: 144660 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 
18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.606 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
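(The per-key trace above is setup/common.sh's get_meminfo walking /proc/meminfo until it reaches the requested field, HugePages_Surp, and echoing its value, 0. As a reading aid only, here is a minimal stand-alone sketch of that lookup; it is not part of the log and not the autotest helper itself. The function name and the simplification to the global /proc/meminfo case are illustrative; the real helper also handles per-node meminfo files and strips their "Node N " prefix, as the mapfile lines above show.)

  # Sketch of the field lookup the repeated [[ ... ]] / continue entries perform.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Non-matching keys are skipped (the "continue" entries in the trace);
          # the first matching key prints its value and stops the scan.
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done </proc/meminfo
      echo 0
  }
  # e.g. get_meminfo_sketch HugePages_Surp   -> 0 on this runner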
00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:10.607 node0=1024 expecting 1024 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:10.607 00:07:10.607 real 0m1.187s 00:07:10.607 user 0m0.543s 00:07:10.607 sys 0m0.610s 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.607 18:32:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:10.607 ************************************ 00:07:10.607 END TEST default_setup 00:07:10.607 ************************************ 00:07:10.607 18:32:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:10.607 18:32:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:10.607 18:32:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.607 18:32:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.607 18:32:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:10.607 ************************************ 00:07:10.607 START TEST per_node_1G_alloc 00:07:10.607 ************************************ 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:10.607 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:10.607 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:11.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.180 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.180 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9036604 kB' 'MemAvailable: 10546688 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493668 kB' 'Inactive: 1351724 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122576 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144588 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77876 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.180 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.181 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.182 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9036612 kB' 'MemAvailable: 10546696 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493488 kB' 'Inactive: 1351724 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144572 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77860 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.182 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.183 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9036612 kB' 'MemAvailable: 10546696 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493040 kB' 'Inactive: 1351724 kB' 'Active(anon): 130872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121964 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144496 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77784 kB' 'KernelStack: 6364 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.184 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 
18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.185 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:11.186 nr_hugepages=512 00:07:11.186 resv_hugepages=0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:11.186 surplus_hugepages=0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:11.186 anon_hugepages=0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9036612 kB' 'MemAvailable: 10546696 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 492984 kB' 'Inactive: 1351724 kB' 'Active(anon): 130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 
kB' 'Writeback: 0 kB' 'AnonPages: 121932 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144496 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77784 kB' 'KernelStack: 6348 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.186 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 
18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.187 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9036612 kB' 'MemUsed: 3205356 kB' 'SwapCached: 0 kB' 'Active: 493200 kB' 'Inactive: 1351724 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1724356 kB' 'Mapped: 48688 kB' 'AnonPages: 122148 kB' 'Shmem: 10464 kB' 'KernelStack: 6332 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66712 kB' 'Slab: 144496 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77784 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.188 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:11.189 node0=512 expecting 512 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:11.189 00:07:11.189 real 0m0.589s 00:07:11.189 user 0m0.261s 00:07:11.189 sys 0m0.367s 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.189 18:32:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:11.189 ************************************ 00:07:11.189 END TEST per_node_1G_alloc 00:07:11.189 ************************************ 00:07:11.189 18:32:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:11.189 18:32:45 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:11.189 18:32:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.189 18:32:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.189 18:32:45 
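The per_node_1G_alloc block above finishes with node0 holding the expected 512 hugepages. As a minimal sketch, assuming the standard kernel sysfs layout for per-node hugepage counters (the node number and the 2048 kB page size are taken from the trace; the sysfs path itself is not shown in this log), the same counter could be read directly:
  # Minimal sketch, assuming the per-node hugepages sysfs interface;
  # node0 and the 2048 kB page size come from the trace above, the path does not.
  nr=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
  [[ "$nr" == 512 ]] && echo "node0=512 expecting 512"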
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:11.189 ************************************ 00:07:11.189 START TEST even_2G_alloc 00:07:11.189 ************************************ 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.189 18:32:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:11.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.760 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.760 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc 
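Before verify_nr_hugepages starts, the even_2G_alloc setup above converts the requested 2097152 kB (2 GiB) into nr_hugepages=1024 and runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A minimal standalone sketch of that arithmetic and invocation, using only the values and the script path that appear in the trace:
  # Sketch of the conversion shown above: 2 GiB expressed in kB, divided by the
  # 2048 kB hugepage size reported in the meminfo snapshots, gives 1024 pages.
  size_kb=2097152
  hugepagesize_kb=2048
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
  # The trace then runs setup.sh with these variables set (path from the trace):
  NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh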
-- setup/hugepages.sh@92 -- # local surp 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7992028 kB' 'MemAvailable: 9502112 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 1351724 kB' 'Active(anon): 131028 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122132 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144508 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77796 kB' 'KernelStack: 6340 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.760 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 
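The long runs of "[[ ... ]] / continue" records here are get_meminfo from setup/common.sh scanning /proc/meminfo field by field until it reaches the requested key (AnonHugePages in this pass, HugePages_Surp and HugePages_Rsvd in the following ones). A minimal standalone sketch of that lookup pattern, covering only the system-wide /proc/meminfo case exercised in this trace (not the per-node meminfo branch):
  # Sketch of the skip-until-match loop visible in the trace: split each
  # /proc/meminfo line on ':' and whitespace, keep reading until the field
  # name equals the requested key, then print its value.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch AnonHugePages   # prints 0 on this run, per the snapshot above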
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7991776 kB' 'MemAvailable: 9501860 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493144 kB' 'Inactive: 
1351724 kB' 'Active(anon): 130976 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122108 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144500 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77788 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.761 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.762 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7992400 kB' 'MemAvailable: 9502484 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 1351724 kB' 'Active(anon): 131028 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144508 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77796 kB' 'KernelStack: 6368 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.763 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.764 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.764 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:11.765 nr_hugepages=1024 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:11.765 resv_hugepages=0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:11.765 surplus_hugepages=0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:11.765 anon_hugepages=0 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7992148 kB' 'MemAvailable: 9502232 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493088 kB' 'Inactive: 1351724 kB' 'Active(anon): 130920 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122052 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144508 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77796 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:11.765 18:32:46 
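[Editor's sketch] The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches HugePages_Rsvd, echoing 0, after which hugepages.sh records resv=0 alongside nr_hugepages=1024, surplus_hugepages=0 and anon_hugepages=0 and checks that the 1024 allocated pages equal nr_hugepages + surp + resv. A minimal stand-in for that lookup pattern is sketched below; the function name get_meminfo_sketch and its variable names are mine, not the SPDK helper itself, and it simplifies the mapfile/continue loop seen in the trace into a single read loop.

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup (simplified stand-in, not the SPDK helper).
# Usage: get_meminfo_sketch <Key> [node-index]
get_meminfo_sketch() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # Prefer the per-node view when a node index is given and sysfs exposes it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}    # per-node lines carry a "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"          # split "Key:   value kB" on colon/spaces
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

With it, the summary above reduces to resv=$(get_meminfo_sketch HugePages_Rsvd) and a consistency check of the form (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )).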
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.765 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.766 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:11.767 18:32:46 
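[Editor's sketch] Once the system-wide HugePages_Total read above returns 1024 and the nr_hugepages + surp + resv check passes, get_nodes (hugepages.sh@27-33 in the trace) globs the /sys/devices/system/node/node<N> directories to learn how many NUMA nodes the pages should be spread over; on this VM it finds one, so no_nodes=1. A sketch of that enumeration follows, assuming the usual sysfs layout and using the 1024-page expectation seen in this run; the array name nodes_sys mirrors the trace, the rest is mine.

#!/usr/bin/env bash
# Sketch of a get_nodes-style enumeration of NUMA nodes.
shopt -s extglob nullglob              # extglob for +([0-9]); nullglob so no match expands to nothing
nodes_sys=()                           # expected hugepage count, indexed by NUMA node number
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=1024     # this run expects the full 1024 pages on whichever node exists
done
echo "no_nodes=${#nodes_sys[@]}"       # prints no_nodes=1 on this single-node VM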
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7992148 kB' 'MemUsed: 4249820 kB' 'SwapCached: 0 kB' 'Active: 492908 kB' 'Inactive: 1351724 kB' 'Active(anon): 130740 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1724356 kB' 'Mapped: 48692 kB' 'AnonPages: 121912 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66712 kB' 'Slab: 144504 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.767 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:11.768 node0=1024 expecting 1024 00:07:11.768 ************************************ 00:07:11.768 END TEST even_2G_alloc 00:07:11.768 ************************************ 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in 
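[Editor's sketch] For the per-node pass above, get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (node=0) and pulls HugePages_Surp, which is 0 here, so the node0=1024 expecting 1024 line and the END TEST even_2G_alloc banner follow. Reduced to its essentials, and assuming the single node 0 of this VM and the usual "Node <n> Key: value" layout of per-node meminfo, the check is roughly the sketch below (awk extraction and variable names are mine).

#!/usr/bin/env bash
# Per-node expectation check boiled down to its essentials (single node 0, as in this run).
expected=1024
node_meminfo=/sys/devices/system/node/node0/meminfo
# Per-node meminfo lines read "Node 0 HugePages_Surp:     0", so the value is the 4th field.
surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_meminfo")
echo "node0=$(( expected + surp )) expecting $expected"
(( expected + surp == expected ))      # mirrors the [[ 1024 == 1024 ]] comparison in the trace below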
"${!nodes_test[@]}" 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:11.768 00:07:11.768 real 0m0.584s 00:07:11.768 user 0m0.277s 00:07:11.768 sys 0m0.323s 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.768 18:32:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.026 18:32:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:12.027 18:32:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:12.027 18:32:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.027 18:32:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.027 18:32:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:12.027 ************************************ 00:07:12.027 START TEST odd_alloc 00:07:12.027 ************************************ 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:12.027 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.286 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.286 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990204 kB' 'MemAvailable: 9500288 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493408 kB' 'Inactive: 1351724 kB' 'Active(anon): 131240 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144504 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77792 kB' 'KernelStack: 6324 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
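[Editor's note] The odd_alloc preamble traced above requests HUGEMEM=2049 (MB), which get_test_nr_hugepages receives as size=2098176 kB and turns into nr_hugepages=1025 against the 2048 kB Hugepagesize reported in the meminfo snapshot. The exact rounding rule inside setup/hugepages.sh is not visible in this trace; the following back-of-the-envelope sketch merely reproduces the numbers seen here:

    # Illustration only, not SPDK code: reproduce the values in the trace above.
    hugepagesize_kb=2048                     # 'Hugepagesize: 2048 kB'
    size_kb=$(( 2049 * 1024 ))               # HUGEMEM=2049 MB -> 2098176 kB
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "$nr_hugepages"                     # prints 1025, a deliberately odd count

The test name odd_alloc reflects exactly this: 2098176 kB is not an even multiple of the 2048 kB page size, so the request rounds up to an odd number of huge pages.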
00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 
18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
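[Editor's note] The long field-by-field scan traced here is setup/common.sh's get_meminfo helper walking /proc/meminfo until it hits the requested key (AnonHugePages in this pass), then echoing the value and returning. The following is a minimal sketch inferred from the traced commands (mapfile, the "Node N " prefix strip, IFS=': ' read, the per-field comparison); the real implementation in setup/common.sh may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo
        local -a mem
        # Use the per-node meminfo file when a node index is given and exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

On the snapshot printed above, this sketch would return 0 for AnonHugePages and 1025 for HugePages_Total, matching the 'echo 0' / 'return 0' pairs and the HugePages_Total: 1025 line in the trace.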
00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7989952 kB' 'MemAvailable: 9500036 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493268 kB' 'Inactive: 1351724 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122228 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144512 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77800 kB' 'KernelStack: 6384 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 
18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.550 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 
18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:12.551 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7989952 kB' 'MemAvailable: 9500036 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493232 kB' 'Inactive: 1351724 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144508 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77796 kB' 'KernelStack: 6368 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
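[Editor's note] verify_nr_hugepages then repeats this scan for HugePages_Surp and HugePages_Rsvd, both of which come back 0 in this run, before asserting that the kernel's total matches the requested count plus surplus and reserved pages (the checks at hugepages.sh@107 and @109 further below). A rough illustration of that accounting, reusing the get_meminfo sketch shown earlier and the values visible in this trace, not the verbatim setup/hugepages.sh code:

    nr_hugepages=1025
    anon=$(get_meminfo AnonHugePages)     # 0 in this trace
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    (( 1025 == nr_hugepages + surp + resv ))   # mirrors hugepages.sh@107
    (( 1025 == nr_hugepages ))                 # mirrors hugepages.sh@109

With anon, surp and resv all zero, both assertions hold for the 1025 pages requested by this test.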
00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.552 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:12.553 nr_hugepages=1025 00:07:12.553 resv_hugepages=0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:12.553 surplus_hugepages=0 00:07:12.553 anon_hugepages=0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7989952 kB' 'MemAvailable: 9500036 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493280 kB' 'Inactive: 1351724 kB' 'Active(anon): 131112 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144500 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77788 kB' 'KernelStack: 6368 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.553 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.554 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7989952 kB' 'MemUsed: 4252016 kB' 'SwapCached: 0 kB' 'Active: 493280 kB' 'Inactive: 1351724 kB' 'Active(anon): 131112 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1724356 kB' 'Mapped: 48692 kB' 'AnonPages: 122192 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66712 kB' 'Slab: 144492 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 
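
A minimal sketch of the lookup pattern this stretch of the trace keeps looping through: setup/common.sh slurps a meminfo-style file, strips any leading "Node <N> " prefix (the per-node files under /sys carry one), then scans "Key: value" pairs until the requested key matches and echoes the value. The helper name meminfo_value below is illustrative only, not the SPDK function itself.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo-style lookup the trace shows; not the SPDK helper.
    shopt -s extglob                      # enables the +([0-9]) pattern below

    meminfo_value() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node index is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 MemTotal: ..." -> "MemTotal: ..."

        local entry var val _
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_value HugePages_Rsvd      # system-wide reserved hugepages
    meminfo_value HugePages_Surp 0    # surplus hugepages on NUMA node 0
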
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.555 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:12.556 node0=1025 expecting 1025 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:12.556 00:07:12.556 real 0m0.677s 00:07:12.556 user 0m0.317s 00:07:12.556 sys 0m0.365s 00:07:12.556 ************************************ 00:07:12.556 END TEST odd_alloc 00:07:12.556 ************************************ 00:07:12.556 18:32:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.557 18:32:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.557 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:12.557 18:32:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:12.557 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.557 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.557 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:12.557 ************************************ 00:07:12.557 START TEST custom_alloc 00:07:12.557 ************************************ 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- 
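
The odd_alloc case that just wrapped up boils down to the accounting check below; this is a sketch rather than the SPDK helper itself, with a small awk lookup standing in for the fuller meminfo parser sketched earlier.

    #!/usr/bin/env bash
    # Sketch of the odd_alloc consistency check: request an odd hugepage count
    # and insist the kernel's reported totals add up to exactly that request.
    get() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

    requested=1025                       # the deliberately odd allocation under test
    total=$(get HugePages_Total)
    rsvd=$(get HugePages_Rsvd)
    surp=$(get HugePages_Surp)

    if (( total == requested )) && (( total == requested + surp + rsvd )); then
        echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"
        echo "node0=$total expecting $requested"   # single-node VM: all pages on node 0
    else
        echo "hugepage accounting mismatch" >&2
    fi
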
setup/hugepages.sh@67 -- # nodes_test=() 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:12.557 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:12.815 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:12.816 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:13.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.078 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
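
How custom_alloc arrives at nodes_hp[0]=512 above, sketched under the assumptions that hold in this run (2048 kB default hugepages, a single NUMA node): the requested 1048576 kB becomes 512 pages, all pinned to node 0 and handed to scripts/setup.sh through the HUGENODE list. Variable names mirror the trace but the code is a sketch, not the script itself.

    #!/usr/bin/env bash
    # Sketch of the custom_alloc sizing step under this run's assumptions.
    size_kb=1048576
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)  # 2048 here
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                              # 512

    declare -a nodes_hp=()
    nodes_hp[0]=$nr_hugepages            # only one node to spread the pages over

    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE=${HUGENODE[*]}"       # -> HUGENODE=nodes_hp[0]=512
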
# verify_nr_hugepages 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9035496 kB' 'MemAvailable: 10545580 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493716 kB' 'Inactive: 1351724 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122568 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144524 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77812 kB' 'KernelStack: 6336 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.078 18:32:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.078 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _ xtrace repeats for every /proc/meminfo field from MemFree through Committed_AS ...]
00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- #
continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241968 kB' 'MemFree: 9035496 kB' 'MemAvailable: 10545580 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493508 kB' 'Inactive: 1351724 kB' 'Active(anon): 131340 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144512 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77800 kB' 'KernelStack: 6380 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 18:32:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _ xtrace repeats for every /proc/meminfo field from Active through Unaccepted ...]
00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9035496 kB' 'MemAvailable: 10545580 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493576 kB' 'Inactive: 1351724 kB' 'Active(anon): 131408 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144508 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77796 kB' 'KernelStack: 6380 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 349552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 18:32:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.081 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _ xtrace repeats for every /proc/meminfo field from Inactive(anon) through ShmemHugePages ...]
00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
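The surrounding xtrace is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key (here HugePages_Rsvd) matches, then echoing its value. A minimal bash sketch of that pattern, reconstructed from the trace alone (the paths, variable names and the per-node branch are taken from the traced commands; the in-tree helper may differ in detail):

    # Sketch of the lookup pattern traced above; not the verbatim setup/common.sh source.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # With a node argument, prefer the per-NUMA-node meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field; IFS=': ' splits "MemTotal: 12241968 kB" into
        # var=MemTotal, val=12241968, _=kB, so the echoed value is the bare number.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd prints 0 on this runner.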
00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.345 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:13.346 nr_hugepages=512 00:07:13.346 resv_hugepages=0 00:07:13.346 surplus_hugepages=0 00:07:13.346 anon_hugepages=0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9035244 kB' 'MemAvailable: 10545328 kB' 'Buffers: 2436 kB' 'Cached: 1721920 kB' 'SwapCached: 0 kB' 'Active: 493656 kB' 'Inactive: 1351724 kB' 'Active(anon): 131488 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 66712 kB' 'Slab: 144520 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77808 kB' 'KernelStack: 6396 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
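(Just above, hugepages.sh@100-110 folds those lookups into the actual custom_alloc verification: reserved, surplus, and anonymous huge pages all come back 0, HugePages_Total is 512, and the expected pool of 512 has to equal nr_hugepages plus surplus plus reserved, both globally and per node. Below is a small sketch of that bookkeeping using this run's numbers; verify_pool_sketch is an illustrative name, not SPDK's.)

#!/usr/bin/env bash
# Sketch of the accounting checked above, with the values from this run.
verify_pool_sketch() {
    local expected=512                 # pages the custom_alloc test asked for
    local nr_hugepages=512             # HugePages_Total from /proc/meminfo
    local resv=0 surp=0 anon=0         # HugePages_Rsvd, HugePages_Surp, AnonHugePages

    # Global pool must account for reserved and surplus pages.
    (( expected == nr_hugepages + surp + resv )) || return 1

    # Per-node pool (node0 is the only node on this VM) gets the same treatment.
    local node0=512                    # HugePages_Total from node0/meminfo
    (( node0 += surp + resv ))         # both are 0 here, so node0 stays 512
    echo "node0=$node0 expecting $expected"
    [[ $node0 == "$expected" ]]
}
verify_pool_sketch    # prints "node0=512 expecting 512" and exits 0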
00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.346 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.347 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.348 
18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9035244 kB' 'MemUsed: 3206724 kB' 'SwapCached: 0 kB' 'Active: 493560 kB' 'Inactive: 1351724 kB' 'Active(anon): 131392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1724356 kB' 'Mapped: 48624 kB' 'AnonPages: 122148 kB' 'Shmem: 10464 kB' 'KernelStack: 6364 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66712 kB' 'Slab: 144520 kB' 'SReclaimable: 66712 kB' 'SUnreclaim: 77808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:13.349 node0=512 expecting 512 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:13.349 00:07:13.349 real 0m0.597s 00:07:13.349 user 0m0.273s 00:07:13.349 sys 0m0.339s 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.349 18:32:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:13.349 ************************************ 00:07:13.349 END TEST custom_alloc 00:07:13.349 
************************************ 00:07:13.349 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:13.349 18:32:47 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:13.349 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.349 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.349 18:32:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:13.349 ************************************ 00:07:13.349 START TEST no_shrink_alloc 00:07:13.349 ************************************ 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.349 18:32:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:13.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.625 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.625 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:13.889 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986164 kB' 'MemAvailable: 9496252 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489044 kB' 'Inactive: 1351728 kB' 'Active(anon): 126876 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117984 kB' 'Mapped: 47980 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144420 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77712 kB' 'KernelStack: 6260 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.889 
18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.889 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
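(By this point no_shrink_alloc has asked get_test_nr_hugepages for a size of 2097152 on node 0, which at the 2048 kB Hugepagesize shown in the snapshot works out to the nr_hugepages=1024 echoed in the trace, and verify_nr_hugepages is scanning for AnonHugePages because transparent hugepages are not pinned to [never]. The sketch below reproduces those two steps with this run's values; the kB unit, the sysfs path, and the variable names are inferred from the log, not lifted from the SPDK scripts.)

#!/usr/bin/env bash
# Sketch of the two steps driving this part of the trace, with this run's values.
size=2097152                                                      # argument to get_test_nr_hugepages
hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM
nr_hugepages=$(( size / hugepagesize ))                           # 2097152 / 2048 = 1024, as above
echo "nr_hugepages=$nr_hugepages"

# Anonymous huge pages only matter if THP is not set to "never"; the trace
# compares the string "always [madvise] never" against the pattern *[never]*.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)      # 0 kB in this run
fi
echo "anon_hugepages=$anon"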
00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.890 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986164 kB' 'MemAvailable: 9496252 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489268 kB' 'Inactive: 1351728 kB' 'Active(anon): 127100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118208 kB' 'Mapped: 47980 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144420 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77712 kB' 'KernelStack: 6244 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.891 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:13.892 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986164 kB' 'MemAvailable: 9496252 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489136 kB' 'Inactive: 1351728 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118080 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144408 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77700 kB' 'KernelStack: 6256 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.893 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:13.894 nr_hugepages=1024 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:13.894 resv_hugepages=0 00:07:13.894 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:13.894 surplus_hugepages=0 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:13.895 anon_hugepages=0 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
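At this point the trace has resolved anon=0, surp=0 and resv=0, echoes the summary values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and evaluates (( 1024 == nr_hugepages + surp + resv )) followed by (( 1024 == nr_hugepages )). A hedged sketch of that accounting check, reusing the hypothetical get_meminfo_sketch helper from the note above (the actual setup/hugepages.sh wiring may differ):

    expected=1024                                      # pages the test asked for
    anon=$(get_meminfo_sketch AnonHugePages)           # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)          # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0
    nr_hugepages=$(get_meminfo_sketch HugePages_Total) # 1024

    # every requested page must be plain pool capacity: no surplus pages were
    # allocated on demand and none are reserved for pending mappings
    (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( anon == 0 )) || echo "unexpected transparent hugepage usage" >&2

Both arithmetic tests pass here, so the trace immediately re-queries HugePages_Total, which is the get_meminfo call whose setup (mem_f=/proc/meminfo, mapfile, prefix strip) is visible at the end of this line.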
00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986164 kB' 'MemAvailable: 9496252 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489116 kB' 'Inactive: 1351728 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118060 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144404 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77696 kB' 'KernelStack: 6272 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
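The snapshot printed above is internally consistent: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and with Hugepagesize at 2048 kB the pool size matches the Hugetlb line exactly. A quick check of that arithmetic:

    echo $(( 1024 * 2048 ))   # 2097152 kB, the Hugetlb value reported above
    # HugePages_Free == HugePages_Total, so none of the pool is mapped yet
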
00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.895 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
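The remainder of this get_meminfo call and the per-node bookkeeping traced below (hugepages.sh@110-@130) reduce to roughly the sketch that follows. verify_nodes is a hypothetical wrapper name used only for illustration; the real logic is inline in setup/hugepages.sh's verify_nr_hugepages, and the example state mirrors what the trace reports (one node, 1024 pages, no reserved or surplus pages). It reuses the get_meminfo sketch shown earlier.

# example state matching the trace below: one node, 1024 pages, resv = surp = 0
declare -a nodes_test=([0]=1024) nodes_sys=([0]=1024)
resv=0

verify_nodes() {                                        # hypothetical name; see note above
    local node surp
    for node in "${!nodes_test[@]}"; do                 # hugepages.sh@115
        (( nodes_test[node] += resv ))                  # @116: fold in reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")      # @117: per-node surplus from sysfs
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do                 # @126
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"   # @128
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || return 1       # @130
    done
}

verify_nodes && echo OK      # prints "node0=1024 expecting 1024" then OK on this runner

In the trace this is what produces the "node0=1024 expecting 1024" line and the final [[ 1024 == 1024 ]] check before the NRHUGE=512 setup output.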
00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986164 kB' 'MemUsed: 4255804 kB' 'SwapCached: 0 kB' 'Active: 489000 kB' 'Inactive: 1351728 kB' 'Active(anon): 126832 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1724360 kB' 'Mapped: 47952 kB' 'AnonPages: 117944 kB' 
'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66708 kB' 'Slab: 144404 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:13.896 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 
18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.897 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.897 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:13.898 node0=1024 expecting 1024 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.898 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.467 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.467 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.467 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:14.467 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986212 kB' 'MemAvailable: 9496300 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489352 kB' 'Inactive: 1351728 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118260 kB' 'Mapped: 48020 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144340 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77632 kB' 'KernelStack: 6276 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.467 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986212 kB' 'MemAvailable: 9496300 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489120 kB' 'Inactive: 1351728 kB' 'Active(anon): 126952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118076 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144340 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77632 kB' 'KernelStack: 6288 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:14.468 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
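The repeated [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue entries running through these lines are bash xtrace output from the key-scan loop in setup/common.sh's get_meminfo helper: each line of the meminfo snapshot is split on ': ' into a key and a value, non-matching keys fall through to continue, and only the requested field's value is echoed back (each earlier pass ends the same way, e.g. echo 0 / return 0 and anon=0 just above). The xtrace escaping of every character in the literal pattern word is what produces the backslash soup on the right-hand side. Below is a minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source; the name get_meminfo_sketch is illustrative.

# Reconstruction sketched from the xtrace above; illustrative, not the actual setup/common.sh code.
get_meminfo_sketch() {
    local get=$1           # field to look up, e.g. HugePages_Surp
    local var val _ line
    local -a mem
    mapfile -t mem < /proc/meminfo
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # xtrace renders this as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] because it
        # backslash-escapes every character of the literal pattern word.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
# Usage matching the hugepages.sh assignments seen in the trace:
# surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 on this host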
00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 
18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.469 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986212 kB' 'MemAvailable: 9496300 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489104 kB' 'Inactive: 1351728 kB' 'Active(anon): 126936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118084 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144340 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77632 kB' 'KernelStack: 6272 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.470 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.471 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
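Each lookup in the trace also begins with the same preamble: local node=, mem_f=/proc/meminfo, a test for /sys/devices/system/node/node/meminfo (node is empty here, so the test fails), then mapfile -t mem and a strip of any leading "Node <n> " prefix. In other words the helper can read either the system-wide /proc/meminfo or a per-NUMA-node meminfo file and normalize both to the same key/value layout. A hedged sketch of that source selection, reconstructed from the trace with illustrative function and variable names:

# Sketch of the per-node meminfo source selection seen at the start of each lookup above.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> " prefixes
read_meminfo_lines() {
    local node=$1                      # empty -> fall back to the system-wide file
    local mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <n> "; strip that so the key/value
    # parsing is the same for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}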
00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:14.472 nr_hugepages=1024 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:14.472 resv_hugepages=0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:14.472 surplus_hugepages=0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:14.472 anon_hugepages=0 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986212 kB' 'MemAvailable: 9496300 kB' 'Buffers: 2436 kB' 'Cached: 1721924 kB' 'SwapCached: 0 kB' 'Active: 489084 kB' 'Inactive: 1351728 kB' 'Active(anon): 126916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118076 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66708 kB' 'Slab: 144340 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77632 kB' 'KernelStack: 6272 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.472 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
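The nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 echoes a little further up, followed by (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), show how hugepages.sh combines the three lookups: the no_shrink_alloc test checks that the kernel still reports the full requested pool once surplus and reserved pages are accounted for, and the scan running through these lines is the follow-up get_meminfo HugePages_Total call. The sketch below mirrors the shape of that check with this run's numbers; awk stands in for the traced get_meminfo helper and the exact conditional structure in hugepages.sh may differ.

# Shape of the consistency check traced above, using this run's values; awk is a
# stand-in for the get_meminfo helper and the variable names are illustrative.
nr_hugepages=1024
anon=$(awk '$1 == "AnonHugePages:"    {print $2}' /proc/meminfo)   # 0 in this run
surp=$(awk '$1 == "HugePages_Surp:"   {print $2}' /proc/meminfo)   # 0
resv=$(awk '$1 == "HugePages_Rsvd:"   {print $2}' /proc/meminfo)   # 0
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Here: 1024 == 1024 + 0 + 0, so the preallocated pool was not shrunk.
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))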
00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.473 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:14.474 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7986212 kB' 'MemUsed: 4255756 kB' 'SwapCached: 0 kB' 'Active: 
488772 kB' 'Inactive: 1351728 kB' 'Active(anon): 126604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1351728 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1724360 kB' 'Mapped: 47952 kB' 'AnonPages: 117964 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66708 kB' 'Slab: 144340 kB' 'SReclaimable: 66708 kB' 'SUnreclaim: 77632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 
18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.757 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.758 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:14.759 node0=1024 expecting 1024 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:14.759 00:07:14.759 real 0m1.306s 00:07:14.759 user 0m0.606s 00:07:14.759 sys 0m0.707s 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.759 18:32:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 ************************************ 00:07:14.759 END TEST no_shrink_alloc 00:07:14.759 ************************************ 00:07:14.759 18:32:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:14.759 
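The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' entries above are setup/common.sh scanning every meminfo field until it reaches the one requested: first HugePages_Total from /proc/meminfo (which prints 1024), then HugePages_Surp from /sys/devices/system/node/node0/meminfo (which prints 0), feeding the closing 'node0=1024 expecting 1024' assertion. A minimal sketch of that scan, reconstructed from the trace rather than copied from the real helper (the function name and the sed-based prefix strip are simplifications):

    # Reconstruction of the meminfo scan driving the trace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _

        # Per-node lookups (e.g. HugePages_Surp for node0 above) read the node's own file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Per-node files prefix every line with "Node <n> "; drop that, then scan
        # field by field -- the origin of the repeated "continue" entries above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Total       # printed 1024 in the trace
    get_meminfo_sketch HugePages_Surp 0      # printed 0 for node0 in the trace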
18:32:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:14.759 18:32:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:14.759 00:07:14.759 real 0m5.451s 00:07:14.759 user 0m2.466s 00:07:14.759 sys 0m3.027s 00:07:14.759 ************************************ 00:07:14.759 END TEST hugepages 00:07:14.759 ************************************ 00:07:14.759 18:32:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.759 18:32:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 18:32:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:14.759 18:32:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:14.759 18:32:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.759 18:32:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.759 18:32:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 ************************************ 00:07:14.759 START TEST driver 00:07:14.759 ************************************ 00:07:14.759 18:32:49 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:14.759 * Looking for test storage... 00:07:14.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:14.759 18:32:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:14.759 18:32:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:14.759 18:32:49 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:15.695 18:32:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:15.695 18:32:49 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.695 18:32:49 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.695 18:32:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:15.695 ************************************ 00:07:15.695 START TEST guess_driver 00:07:15.695 ************************************ 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
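The guess_driver decision completes just below: this VM exposes no IOMMU groups and the unsafe no-IOMMU flag is unset, so the vfio branch returns 1 and the test settles on uio_pci_generic because modprobe --show-depends resolves it to real .ko modules. A condensed sketch of that selection, reconstructed from the trace (the function name and the vfio-pci fallback string are assumptions; only uio_pci_generic is confirmed by the log):

    # Reconstruction of the driver pick traced here.
    pick_driver_sketch() {
        local unsafe_vfio='' iommu_groups

        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

        shopt -s nullglob
        iommu_groups=(/sys/kernel/iommu_groups/*)

        # vfio needs IOMMU groups, or the explicit unsafe no-IOMMU override.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
            return 0
        fi

        # Otherwise accept uio_pci_generic if modprobe can resolve it to modules.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi

        echo 'No valid driver found'
        return 1
    }

    driver=$(pick_driver_sketch)    # yields uio_pci_generic on this VM, as logged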
00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:15.695 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:15.695 Looking for driver=uio_pci_generic 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:15.695 18:32:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:16.262 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:16.262 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:07:16.262 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:16.521 18:32:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:17.090 00:07:17.090 real 0m1.693s 00:07:17.090 user 0m0.616s 00:07:17.090 sys 0m1.106s 00:07:17.090 ************************************ 00:07:17.090 END TEST guess_driver 00:07:17.090 
************************************ 00:07:17.090 18:32:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.090 18:32:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:17.362 18:32:51 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:17.362 ************************************ 00:07:17.362 END TEST driver 00:07:17.362 ************************************ 00:07:17.362 00:07:17.362 real 0m2.508s 00:07:17.362 user 0m0.897s 00:07:17.362 sys 0m1.716s 00:07:17.362 18:32:51 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.362 18:32:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:17.362 18:32:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:17.362 18:32:51 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:17.362 18:32:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.362 18:32:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.362 18:32:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:17.362 ************************************ 00:07:17.362 START TEST devices 00:07:17.362 ************************************ 00:07:17.362 18:32:51 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:17.362 * Looking for test storage... 00:07:17.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:17.362 18:32:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:17.362 18:32:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:17.362 18:32:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:17.362 18:32:51 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
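The devices suite starts by filtering out zoned namespaces; every /sys/block/nvme*/queue/zoned on this VM reads "none", so the is_block_zoned checks above and below all fall through and zoned_devs stays empty. A stripped-down sketch of that filter (the real get_zoned_devs also records each zoned device's PCI address, which is omitted here):

    # Sketch of the zoned-namespace filter; only the device name is recorded.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e /sys/block/$dev/queue/zoned ]] || continue
        # "none" means an ordinary namespace; anything else is excluded from the tests.
        [[ $(< "/sys/block/$dev/queue/zoned") == none ]] && continue
        zoned_devs[$dev]=1
    done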
00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:18.296 18:32:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:18.296 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:18.297 No valid GPT data, bailing 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:07:18.297 
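Each namespace is then checked for eligibility: nvme0n1 above is accepted because spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing") and its size clears the 3 GiB minimum; nvme0n2, nvme0n3 and nvme1n1 repeat the same sequence below. A sketch of that pass, with plain blkid standing in for the spdk-gpt.py + blkid pair, and with an assumed sectors*512 size derivation and PCI lookup (the trace only shows the resulting byte counts and addresses):

    # Sketch of the block-device eligibility pass; min_disk_size matches the trace.
    shopt -s nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    declare -a blocks=()
    declare -A blocks_to_pci=()

    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $dev == *c* ]] && continue            # skip hidden multipath nodes (nvme0c0n1 etc.)
        # Assumed size derivation: 512-byte sectors reported by sysfs; the trace
        # only shows the totals (4294967296 and 5368709120 bytes).
        size=$(( $(< "$block/size") * 512 ))
        (( size >= min_disk_size )) || continue
        # Reject namespaces that already carry a partition table.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || continue
        blocks+=("$dev")
        # Assumed way to recover the owning controller's PCI address (e.g. 0000:00:11.0).
        blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
    done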
18:32:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:07:18.297 No valid GPT data, bailing 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:07:18.297 No valid GPT data, bailing 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:18.297 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:07:18.297 18:32:52 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:18.297 No valid GPT data, bailing 00:07:18.297 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:18.555 18:32:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:18.555 18:32:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:18.555 18:32:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:18.555 18:32:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:18.555 18:32:52 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:18.555 18:32:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:18.555 18:32:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.555 18:32:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.555 18:32:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:18.555 ************************************ 00:07:18.555 START TEST nvme_mount 00:07:18.555 ************************************ 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:18.555 18:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:19.488 Creating new GPT entries in memory. 00:07:19.488 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:19.488 other utilities. 00:07:19.488 18:32:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:19.488 18:32:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:19.488 18:32:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:19.488 18:32:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:19.488 18:32:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:20.423 Creating new GPT entries in memory. 00:07:20.423 The operation has completed successfully. 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58919 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.423 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- 
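The partition/format/mount sequence for the first scenario is now complete: partition_drive zaps any existing labels, creates partition 1 over sectors 2048-264191 (the byte size divided by 4096, per the trace), waits for the partition uevent via scripts/sync_dev_uevents.sh, and the new partition then gets an ext4 filesystem and is mounted under test/setup/nvme_mount. Condensed to its shell steps (a reconstruction; the real helpers also take an flock on the disk and background the uevent wait, and udevadm settle stands in for sync_dev_uevents.sh here):

    # Condensed reconstruction of the traced partition/format/mount steps.
    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                 # wipe existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191       # partition 1 over sectors 2048-264191
    udevadm settle                           # stand-in for scripts/sync_dev_uevents.sh

    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"                    # quiet, force, as in the trace
    mount "$part" "$mnt"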
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:20.681 18:32:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.681 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.938 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:21.256 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:21.256 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.256 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.256 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:21.256 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
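After the first verify pass the mount is torn down: cleanup_nvme unmounts the test directory if it is still a mount point, then wipes the partition signature (the ext4 magic at offset 0x438) and the whole-disk GPT/MBR signatures so the next scenario starts from a blank namespace, exactly as the wipefs output above shows. In plain shell:

    # Teardown between nvme_mount scenarios, as traced above.
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic at 0x438
    [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # GPT headers + protective MBR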
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:21.256 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:21.524 18:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:21.782 18:32:56 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:21.782 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:22.039 18:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:22.297 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:22.556 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:22.556 ************************************ 00:07:22.556 END TEST nvme_mount 00:07:22.556 ************************************ 00:07:22.556 00:07:22.556 real 0m4.094s 00:07:22.556 user 0m0.733s 00:07:22.556 sys 0m1.141s 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.556 18:32:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:22.556 18:32:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:22.556 18:32:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:22.556 18:32:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.556 18:32:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.556 18:32:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:22.556 ************************************ 00:07:22.556 START TEST dm_mount 00:07:22.556 ************************************ 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
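Editor's note: the nvme_mount test that finished above formats /dev/nvme0n1, mounts it under test/setup/nvme_mount, drops a marker file, re-runs setup.sh to confirm the device is reported as active, then unmounts and wipes it. The following is a condensed, illustrative recreation of that cycle, with the paths and the 1024M size copied from the trace; it is a sketch, not the exact devices.sh implementation, and it assumes /dev/nvme0n1 is a scratch disk whose contents may be destroyed.

# Illustrative recreation of the nvme_mount cycle traced above.
MNT=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

mkdir -p "$MNT"
mkfs.ext4 -qF /dev/nvme0n1 1024M        # format only the first 1024M of the disk
mount /dev/nvme0n1 "$MNT"
touch "$MNT/test_nvme"                  # marker file the verify step looks for

mountpoint -q "$MNT" && rm "$MNT/test_nvme"
umount "$MNT"
wipefs --all /dev/nvme0n1               # clear the leftover ext4/GPT signatures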
00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:22.556 18:32:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:23.489 Creating new GPT entries in memory. 00:07:23.489 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:23.489 other utilities. 00:07:23.489 18:32:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:23.489 18:32:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:23.489 18:32:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:23.489 18:32:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:23.489 18:32:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:24.863 Creating new GPT entries in memory. 00:07:24.863 The operation has completed successfully. 00:07:24.863 18:32:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:24.863 18:32:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:24.863 18:32:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:24.863 18:32:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:24.863 18:32:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:25.796 The operation has completed successfully. 
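Editor's note: the sgdisk calls above zap the GPT and carve two equal partitions (sectors 2048-264191 and 264192-526335); the next step in the trace builds the nvme_dm_test device-mapper volume over them and formats it. Below is a hedged manual equivalent. The sector ranges and names come from the trace; the linear dm table and the blockdev/rereadpt calls are my own illustration (the real harness serializes partitioning with flock and a udev-sync helper, which this sketch omits).

# Rebuild the two-partition layout from the trace, then join the partitions
# into one device-mapper volume.
DISK=/dev/nvme0n1

sgdisk "$DISK" --zap-all                 # destroy any existing GPT/MBR
sgdisk "$DISK" --new=1:2048:264191       # partition 1 (262144 sectors)
sgdisk "$DISK" --new=2:264192:526335     # partition 2 (262144 sectors)
blockdev --rereadpt "$DISK"              # have the kernel pick up the new table

p1=$(blockdev --getsz "${DISK}p1")       # partition sizes in 512-byte sectors
p2=$(blockdev --getsz "${DISK}p2")

# Concatenate the two partitions with a linear mapping named nvme_dm_test.
dmsetup create nvme_dm_test <<EOF
0 $p1 linear ${DISK}p1 0
$p1 $p2 linear ${DISK}p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # format the combined device, as the test does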
00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59352 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:25.796 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.054 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:26.313 18:33:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.572 18:33:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.572 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.572 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:26.830 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:26.830 00:07:26.830 real 0m4.271s 00:07:26.830 user 0m0.507s 00:07:26.830 sys 0m0.739s 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.830 18:33:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:26.830 ************************************ 00:07:26.830 END TEST dm_mount 00:07:26.830 ************************************ 00:07:26.830 18:33:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:26.830 18:33:01 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:26.830 18:33:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:27.088 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:27.088 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:27.088 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:27.088 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:27.088 18:33:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:27.088 00:07:27.088 real 0m9.916s 00:07:27.088 user 0m1.888s 00:07:27.088 sys 0m2.508s 00:07:27.088 18:33:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.088 ************************************ 00:07:27.088 END TEST devices 00:07:27.088 ************************************ 00:07:27.088 18:33:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:27.346 18:33:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:27.346 00:07:27.346 real 0m23.480s 00:07:27.346 user 0m7.504s 00:07:27.346 sys 0m10.617s 00:07:27.346 18:33:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.346 18:33:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:27.346 ************************************ 00:07:27.346 END TEST setup.sh 00:07:27.346 ************************************ 00:07:27.346 18:33:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.346 18:33:01 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:27.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.910 Hugepages 00:07:27.910 node hugesize free / total 00:07:27.910 node0 1048576kB 0 / 0 00:07:27.910 node0 2048kB 2048 / 2048 00:07:27.910 00:07:27.910 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:28.167 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:28.167 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:28.167 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:07:28.167 18:33:02 -- spdk/autotest.sh@130 -- # uname -s 00:07:28.167 18:33:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:28.167 18:33:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:28.167 18:33:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.733 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.991 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:28.991 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:28.991 18:33:03 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:30.402 18:33:04 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:30.402 18:33:04 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:30.402 18:33:04 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.402 18:33:04 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:30.402 18:33:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:30.402 18:33:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:30.402 18:33:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.402 18:33:04 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.402 18:33:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:30.402 18:33:04 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:30.402 18:33:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:30.402 18:33:04 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.402 Waiting for block devices as requested 00:07:30.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.660 18:33:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:30.660 18:33:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:30.660 18:33:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:30.660 18:33:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:30.660 18:33:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1557 -- # continue 00:07:30.660 
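Editor's note: the pre-cleanup above reads each controller's OACS (Optional Admin Command Support) field and its unallocated NVM capacity to decide whether a namespace revert is needed; oacs=0x12a with bit 3 set (0x8) means namespace management is supported, and unvmcap=0 means nothing is left to reclaim, so the loop continues. A minimal sketch of the same probe, mirroring the grep/cut pipeline in the trace (assumes nvme-cli is installed and /dev/nvme1 exists, as it does here):

# Check whether a controller supports namespace management (OACS bit 3)
# and whether any NVM capacity is still unallocated.
ctrl=/dev/nvme1

oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)          # e.g. " 0x12a"
if (( oacs & 0x8 )); then
    echo "$ctrl supports namespace management"
fi

unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)    # e.g. " 0"
if (( unvmcap == 0 )); then
    echo "no unallocated capacity to revert on $ctrl"
fi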
18:33:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:30.660 18:33:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:30.660 18:33:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:30.660 18:33:05 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:30.660 18:33:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:30.660 18:33:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:30.660 18:33:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:30.660 18:33:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:30.660 18:33:05 -- common/autotest_common.sh@1557 -- # continue 00:07:30.660 18:33:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:30.660 18:33:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.660 18:33:05 -- common/autotest_common.sh@10 -- # set +x 00:07:30.918 18:33:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:30.918 18:33:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.918 18:33:05 -- common/autotest_common.sh@10 -- # set +x 00:07:30.918 18:33:05 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:31.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.483 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.742 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.742 18:33:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:31.742 18:33:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.742 18:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 18:33:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:31.742 18:33:06 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:31.742 18:33:06 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:31.742 18:33:06 -- common/autotest_common.sh@1577 -- # bdfs=() 00:07:31.742 18:33:06 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:31.742 18:33:06 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:31.742 18:33:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:31.742 18:33:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:31.742 18:33:06 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.742 18:33:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.742 18:33:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:31.742 18:33:06 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:31.742 18:33:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:31.742 18:33:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:31.742 18:33:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:31.742 18:33:06 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:31.742 18:33:06 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:31.742 18:33:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:31.742 18:33:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:31.742 18:33:06 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:31.742 18:33:06 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:31.742 18:33:06 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:31.742 18:33:06 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:31.742 18:33:06 -- common/autotest_common.sh@1593 -- # return 0 00:07:31.742 18:33:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:31.742 18:33:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:31.742 18:33:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:31.742 18:33:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:31.742 18:33:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:31.742 18:33:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.742 18:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 18:33:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:31.742 18:33:06 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:31.742 18:33:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.742 18:33:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.742 18:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 ************************************ 00:07:31.742 START TEST env 00:07:31.742 ************************************ 00:07:31.742 18:33:06 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:32.000 * Looking for test storage... 
00:07:32.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:32.000 18:33:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:32.000 18:33:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.000 18:33:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.000 18:33:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:32.000 ************************************ 00:07:32.000 START TEST env_memory 00:07:32.000 ************************************ 00:07:32.000 18:33:06 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:32.000 00:07:32.000 00:07:32.000 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.000 http://cunit.sourceforge.net/ 00:07:32.000 00:07:32.000 00:07:32.000 Suite: memory 00:07:32.000 Test: alloc and free memory map ...[2024-07-15 18:33:06.338288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:32.000 passed 00:07:32.000 Test: mem map translation ...[2024-07-15 18:33:06.366838] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:32.000 [2024-07-15 18:33:06.366940] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:32.000 [2024-07-15 18:33:06.367028] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:32.000 [2024-07-15 18:33:06.367047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:32.000 passed 00:07:32.000 Test: mem map registration ...[2024-07-15 18:33:06.421322] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:32.000 [2024-07-15 18:33:06.421377] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:32.000 passed 00:07:32.258 Test: mem map adjacent registrations ...passed 00:07:32.258 00:07:32.258 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.258 suites 1 1 n/a 0 0 00:07:32.258 tests 4 4 4 0 0 00:07:32.258 asserts 152 152 152 0 n/a 00:07:32.258 00:07:32.258 Elapsed time = 0.176 seconds 00:07:32.258 00:07:32.258 real 0m0.194s 00:07:32.258 user 0m0.173s 00:07:32.258 sys 0m0.017s 00:07:32.258 18:33:06 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.258 18:33:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:32.258 ************************************ 00:07:32.258 END TEST env_memory 00:07:32.258 ************************************ 00:07:32.258 18:33:06 env -- common/autotest_common.sh@1142 -- # return 0 00:07:32.258 18:33:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:32.258 18:33:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.258 18:33:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.258 18:33:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:32.258 ************************************ 00:07:32.258 START TEST env_vtophys 
00:07:32.258 ************************************ 00:07:32.258 18:33:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:32.258 EAL: lib.eal log level changed from notice to debug 00:07:32.258 EAL: Detected lcore 0 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 1 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 2 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 3 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 4 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 5 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 6 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 7 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 8 as core 0 on socket 0 00:07:32.258 EAL: Detected lcore 9 as core 0 on socket 0 00:07:32.258 EAL: Maximum logical cores by configuration: 128 00:07:32.258 EAL: Detected CPU lcores: 10 00:07:32.258 EAL: Detected NUMA nodes: 1 00:07:32.258 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:32.258 EAL: Detected shared linkage of DPDK 00:07:32.258 EAL: No shared files mode enabled, IPC will be disabled 00:07:32.258 EAL: Selected IOVA mode 'PA' 00:07:32.258 EAL: Probing VFIO support... 00:07:32.258 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:32.258 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:32.258 EAL: Ask a virtual area of 0x2e000 bytes 00:07:32.258 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:32.258 EAL: Setting up physically contiguous memory... 00:07:32.258 EAL: Setting maximum number of open files to 524288 00:07:32.258 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:32.258 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:32.258 EAL: Ask a virtual area of 0x61000 bytes 00:07:32.258 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:32.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:32.258 EAL: Ask a virtual area of 0x400000000 bytes 00:07:32.258 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:32.258 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:32.258 EAL: Ask a virtual area of 0x61000 bytes 00:07:32.258 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:32.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:32.258 EAL: Ask a virtual area of 0x400000000 bytes 00:07:32.258 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:32.258 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:32.258 EAL: Ask a virtual area of 0x61000 bytes 00:07:32.258 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:32.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:32.258 EAL: Ask a virtual area of 0x400000000 bytes 00:07:32.258 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:32.258 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:32.258 EAL: Ask a virtual area of 0x61000 bytes 00:07:32.258 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:32.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:32.258 EAL: Ask a virtual area of 0x400000000 bytes 00:07:32.258 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:32.258 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:32.258 EAL: Hugepages will be freed exactly as allocated. 
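Editor's note: the EAL banner above runs against the hugepage pool prepared earlier by setup.sh (the status output showed node0 with 2048 x 2 MB pages and no 1 GiB pages). When reproducing this outside the harness, the pool can be sized either through SPDK's setup script or directly through sysfs. The commands below are not in the trace; HUGEMEM is the documented setup.sh knob (value in MB) and the sysfs path is the standard kernel interface.

# Option 1: let SPDK's setup script reserve hugepages.
sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Option 2: reserve 2048 x 2 MB pages directly for NUMA node 0.
echo 2048 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# Confirm the pool the EAL will see.
grep -i huge /proc/meminfo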
00:07:32.258 EAL: No shared files mode enabled, IPC is disabled 00:07:32.258 EAL: No shared files mode enabled, IPC is disabled 00:07:32.258 EAL: TSC frequency is ~2100000 KHz 00:07:32.258 EAL: Main lcore 0 is ready (tid=7f909cc72a00;cpuset=[0]) 00:07:32.258 EAL: Trying to obtain current memory policy. 00:07:32.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.258 EAL: Restoring previous memory policy: 0 00:07:32.258 EAL: request: mp_malloc_sync 00:07:32.258 EAL: No shared files mode enabled, IPC is disabled 00:07:32.258 EAL: Heap on socket 0 was expanded by 2MB 00:07:32.258 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:32.258 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:32.258 EAL: Mem event callback 'spdk:(nil)' registered 00:07:32.258 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:32.258 00:07:32.258 00:07:32.258 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.258 http://cunit.sourceforge.net/ 00:07:32.258 00:07:32.258 00:07:32.258 Suite: components_suite 00:07:32.258 Test: vtophys_malloc_test ...passed 00:07:32.258 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:32.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.258 EAL: Restoring previous memory policy: 4 00:07:32.258 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.258 EAL: request: mp_malloc_sync 00:07:32.258 EAL: No shared files mode enabled, IPC is disabled 00:07:32.258 EAL: Heap on socket 0 was expanded by 4MB 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was shrunk by 4MB 00:07:32.259 EAL: Trying to obtain current memory policy. 00:07:32.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.259 EAL: Restoring previous memory policy: 4 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was expanded by 6MB 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was shrunk by 6MB 00:07:32.259 EAL: Trying to obtain current memory policy. 00:07:32.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.259 EAL: Restoring previous memory policy: 4 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was expanded by 10MB 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was shrunk by 10MB 00:07:32.259 EAL: Trying to obtain current memory policy. 
00:07:32.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.259 EAL: Restoring previous memory policy: 4 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was expanded by 18MB 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was shrunk by 18MB 00:07:32.259 EAL: Trying to obtain current memory policy. 00:07:32.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.259 EAL: Restoring previous memory policy: 4 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.259 EAL: request: mp_malloc_sync 00:07:32.259 EAL: No shared files mode enabled, IPC is disabled 00:07:32.259 EAL: Heap on socket 0 was expanded by 34MB 00:07:32.259 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.516 EAL: request: mp_malloc_sync 00:07:32.517 EAL: No shared files mode enabled, IPC is disabled 00:07:32.517 EAL: Heap on socket 0 was shrunk by 34MB 00:07:32.517 EAL: Trying to obtain current memory policy. 00:07:32.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.517 EAL: Restoring previous memory policy: 4 00:07:32.517 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.517 EAL: request: mp_malloc_sync 00:07:32.517 EAL: No shared files mode enabled, IPC is disabled 00:07:32.517 EAL: Heap on socket 0 was expanded by 66MB 00:07:32.517 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.517 EAL: request: mp_malloc_sync 00:07:32.517 EAL: No shared files mode enabled, IPC is disabled 00:07:32.517 EAL: Heap on socket 0 was shrunk by 66MB 00:07:32.517 EAL: Trying to obtain current memory policy. 00:07:32.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.517 EAL: Restoring previous memory policy: 4 00:07:32.517 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.517 EAL: request: mp_malloc_sync 00:07:32.517 EAL: No shared files mode enabled, IPC is disabled 00:07:32.517 EAL: Heap on socket 0 was expanded by 130MB 00:07:32.517 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.517 EAL: request: mp_malloc_sync 00:07:32.517 EAL: No shared files mode enabled, IPC is disabled 00:07:32.517 EAL: Heap on socket 0 was shrunk by 130MB 00:07:32.517 EAL: Trying to obtain current memory policy. 00:07:32.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.775 EAL: Restoring previous memory policy: 4 00:07:32.775 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.775 EAL: request: mp_malloc_sync 00:07:32.775 EAL: No shared files mode enabled, IPC is disabled 00:07:32.775 EAL: Heap on socket 0 was expanded by 258MB 00:07:32.775 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.775 EAL: request: mp_malloc_sync 00:07:32.775 EAL: No shared files mode enabled, IPC is disabled 00:07:32.775 EAL: Heap on socket 0 was shrunk by 258MB 00:07:32.775 EAL: Trying to obtain current memory policy. 
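Editor's note: each "Heap on socket 0 was expanded by N MB" line above corresponds to additional 2 MB hugepages being pulled from the reserved pool and registered through the 'spdk:' mem event callback. If you rerun the vtophys binary by hand, the pool draining and refilling can be watched from a second shell; this is just an observation aid, not part of the test itself.

# Observe hugepage consumption while env/vtophys allocates and frees.
watch -n 0.5 'grep -E "HugePages_(Total|Free)" /proc/meminfo'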
00:07:32.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:33.032 EAL: Restoring previous memory policy: 4 00:07:33.032 EAL: Calling mem event callback 'spdk:(nil)' 00:07:33.032 EAL: request: mp_malloc_sync 00:07:33.033 EAL: No shared files mode enabled, IPC is disabled 00:07:33.033 EAL: Heap on socket 0 was expanded by 514MB 00:07:33.318 EAL: Calling mem event callback 'spdk:(nil)' 00:07:33.318 EAL: request: mp_malloc_sync 00:07:33.318 EAL: No shared files mode enabled, IPC is disabled 00:07:33.318 EAL: Heap on socket 0 was shrunk by 514MB 00:07:33.318 EAL: Trying to obtain current memory policy. 00:07:33.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:33.883 EAL: Restoring previous memory policy: 4 00:07:33.883 EAL: Calling mem event callback 'spdk:(nil)' 00:07:33.883 EAL: request: mp_malloc_sync 00:07:33.883 EAL: No shared files mode enabled, IPC is disabled 00:07:33.883 EAL: Heap on socket 0 was expanded by 1026MB 00:07:34.140 EAL: Calling mem event callback 'spdk:(nil)' 00:07:34.140 passed 00:07:34.140 00:07:34.140 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.140 suites 1 1 n/a 0 0 00:07:34.140 tests 2 2 2 0 0 00:07:34.140 asserts 5232 5232 5232 0 n/a 00:07:34.140 00:07:34.140 Elapsed time = 1.841 seconds 00:07:34.140 EAL: request: mp_malloc_sync 00:07:34.140 EAL: No shared files mode enabled, IPC is disabled 00:07:34.140 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:34.140 EAL: Calling mem event callback 'spdk:(nil)' 00:07:34.140 EAL: request: mp_malloc_sync 00:07:34.140 EAL: No shared files mode enabled, IPC is disabled 00:07:34.140 EAL: Heap on socket 0 was shrunk by 2MB 00:07:34.140 EAL: No shared files mode enabled, IPC is disabled 00:07:34.140 EAL: No shared files mode enabled, IPC is disabled 00:07:34.140 EAL: No shared files mode enabled, IPC is disabled 00:07:34.140 00:07:34.140 real 0m2.033s 00:07:34.140 user 0m1.077s 00:07:34.140 sys 0m0.819s 00:07:34.140 18:33:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.140 18:33:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:34.140 ************************************ 00:07:34.140 END TEST env_vtophys 00:07:34.140 ************************************ 00:07:34.140 18:33:08 env -- common/autotest_common.sh@1142 -- # return 0 00:07:34.140 18:33:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:34.140 18:33:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.140 18:33:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.140 18:33:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.398 ************************************ 00:07:34.398 START TEST env_pci 00:07:34.398 ************************************ 00:07:34.398 18:33:08 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:34.398 00:07:34.398 00:07:34.398 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.398 http://cunit.sourceforge.net/ 00:07:34.398 00:07:34.398 00:07:34.398 Suite: pci 00:07:34.398 Test: pci_hook ...[2024-07-15 18:33:08.643243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60551 has claimed it 00:07:34.398 passed 00:07:34.398 00:07:34.398 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.398 suites 1 1 n/a 0 0 00:07:34.398 tests 1 1 1 0 0 00:07:34.398 asserts 25 25 25 0 n/a 00:07:34.398 
00:07:34.398 Elapsed time = 0.003 seconds 00:07:34.398 EAL: Cannot find device (10000:00:01.0) 00:07:34.398 EAL: Failed to attach device on primary process 00:07:34.398 00:07:34.398 real 0m0.022s 00:07:34.398 user 0m0.008s 00:07:34.398 sys 0m0.014s 00:07:34.398 18:33:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.398 18:33:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:34.398 ************************************ 00:07:34.398 END TEST env_pci 00:07:34.398 ************************************ 00:07:34.398 18:33:08 env -- common/autotest_common.sh@1142 -- # return 0 00:07:34.398 18:33:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:34.398 18:33:08 env -- env/env.sh@15 -- # uname 00:07:34.398 18:33:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:34.398 18:33:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:34.398 18:33:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:34.398 18:33:08 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:34.398 18:33:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.398 18:33:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.398 ************************************ 00:07:34.398 START TEST env_dpdk_post_init 00:07:34.398 ************************************ 00:07:34.398 18:33:08 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:34.398 EAL: Detected CPU lcores: 10 00:07:34.398 EAL: Detected NUMA nodes: 1 00:07:34.398 EAL: Detected shared linkage of DPDK 00:07:34.398 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:34.398 EAL: Selected IOVA mode 'PA' 00:07:34.398 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:34.656 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:34.657 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:34.657 Starting DPDK initialization... 00:07:34.657 Starting SPDK post initialization... 00:07:34.657 SPDK NVMe probe 00:07:34.657 Attaching to 0000:00:10.0 00:07:34.657 Attaching to 0000:00:11.0 00:07:34.657 Attached to 0000:00:10.0 00:07:34.657 Attached to 0000:00:11.0 00:07:34.657 Cleaning up... 
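Editor's note: env_dpdk_post_init can only attach to 0000:00:10.0 and 0000:00:11.0 because setup.sh earlier moved both controllers from the kernel nvme driver to uio_pci_generic while leaving the virtio boot disk (0000:00:03.0) alone. A sketch of that binding step for standalone use, reusing the PCI_ALLOWED filter and the setup.sh status/reset subcommands that appear in this log (sudo is shown here only because the harness itself already runs as root):

# Bind only the two emulated NVMe controllers to a userspace driver.
sudo PCI_ALLOWED="0000:00:10.0 0000:00:11.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Inspect the result; the NVMe rows should now list uio_pci_generic (or vfio-pci).
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status

# Hand the controllers back to the kernel nvme driver when finished.
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset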
00:07:34.657 00:07:34.657 real 0m0.182s 00:07:34.657 user 0m0.042s 00:07:34.657 sys 0m0.040s 00:07:34.657 18:33:08 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.657 18:33:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:34.657 ************************************ 00:07:34.657 END TEST env_dpdk_post_init 00:07:34.657 ************************************ 00:07:34.657 18:33:08 env -- common/autotest_common.sh@1142 -- # return 0 00:07:34.657 18:33:08 env -- env/env.sh@26 -- # uname 00:07:34.657 18:33:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:34.657 18:33:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:34.657 18:33:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.657 18:33:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.657 18:33:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.657 ************************************ 00:07:34.657 START TEST env_mem_callbacks 00:07:34.657 ************************************ 00:07:34.657 18:33:08 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:34.657 EAL: Detected CPU lcores: 10 00:07:34.657 EAL: Detected NUMA nodes: 1 00:07:34.657 EAL: Detected shared linkage of DPDK 00:07:34.657 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:34.657 EAL: Selected IOVA mode 'PA' 00:07:34.657 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:34.657 00:07:34.657 00:07:34.657 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.657 http://cunit.sourceforge.net/ 00:07:34.657 00:07:34.657 00:07:34.657 Suite: memory 00:07:34.657 Test: test ... 
00:07:34.657 register 0x200000200000 2097152 00:07:34.657 malloc 3145728 00:07:34.657 register 0x200000400000 4194304 00:07:34.657 buf 0x200000500000 len 3145728 PASSED 00:07:34.657 malloc 64 00:07:34.657 buf 0x2000004fff40 len 64 PASSED 00:07:34.657 malloc 4194304 00:07:34.657 register 0x200000800000 6291456 00:07:34.657 buf 0x200000a00000 len 4194304 PASSED 00:07:34.657 free 0x200000500000 3145728 00:07:34.657 free 0x2000004fff40 64 00:07:34.657 unregister 0x200000400000 4194304 PASSED 00:07:34.657 free 0x200000a00000 4194304 00:07:34.657 unregister 0x200000800000 6291456 PASSED 00:07:34.657 malloc 8388608 00:07:34.657 register 0x200000400000 10485760 00:07:34.657 buf 0x200000600000 len 8388608 PASSED 00:07:34.657 free 0x200000600000 8388608 00:07:34.657 unregister 0x200000400000 10485760 PASSED 00:07:34.657 passed 00:07:34.657 00:07:34.657 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.657 suites 1 1 n/a 0 0 00:07:34.657 tests 1 1 1 0 0 00:07:34.657 asserts 15 15 15 0 n/a 00:07:34.657 00:07:34.657 Elapsed time = 0.007 seconds 00:07:34.657 00:07:34.657 real 0m0.145s 00:07:34.657 user 0m0.019s 00:07:34.657 sys 0m0.024s 00:07:34.657 18:33:09 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.657 18:33:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:34.657 ************************************ 00:07:34.657 END TEST env_mem_callbacks 00:07:34.657 ************************************ 00:07:34.915 18:33:09 env -- common/autotest_common.sh@1142 -- # return 0 00:07:34.915 00:07:34.915 real 0m2.958s 00:07:34.915 user 0m1.447s 00:07:34.915 sys 0m1.172s 00:07:34.915 18:33:09 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.915 18:33:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.915 ************************************ 00:07:34.915 END TEST env 00:07:34.915 ************************************ 00:07:34.915 18:33:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.915 18:33:09 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:34.915 18:33:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.915 18:33:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.915 18:33:09 -- common/autotest_common.sh@10 -- # set +x 00:07:34.915 ************************************ 00:07:34.915 START TEST rpc 00:07:34.915 ************************************ 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:34.915 * Looking for test storage... 00:07:34.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:34.915 18:33:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60655 00:07:34.915 18:33:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.915 18:33:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60655 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@829 -- # '[' -z 60655 ']' 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
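waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock; a stand-alone approximation of that handshake (a sketch, not the helper's actual implementation) is:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5   # keep polling until the UNIX-domain socket accepts JSON-RPC requests
  done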
00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.915 18:33:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.915 18:33:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:34.915 [2024-07-15 18:33:09.375675] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:34.915 [2024-07-15 18:33:09.375765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60655 ] 00:07:35.173 [2024-07-15 18:33:09.514477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.431 [2024-07-15 18:33:09.689267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:35.431 [2024-07-15 18:33:09.689349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60655' to capture a snapshot of events at runtime. 00:07:35.431 [2024-07-15 18:33:09.689365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.431 [2024-07-15 18:33:09.689378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.432 [2024-07-15 18:33:09.689389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60655 for offline analysis/debug. 00:07:35.432 [2024-07-15 18:33:09.689440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.997 18:33:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.997 18:33:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:07:35.997 18:33:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.997 18:33:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.997 18:33:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:35.997 18:33:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:35.997 18:33:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.997 18:33:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.997 18:33:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 ************************************ 00:07:35.997 START TEST rpc_integrity 00:07:35.997 ************************************ 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
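The PYTHONPATH export near the top of this block points at test/rpc_plugins and is what the rpc_plugins case further down relies on; done by hand against a target on the default socket, the plugin calls would look roughly like this (a sketch; rpc_cmd forwards to scripts/rpc.py):
  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin create_malloc           # prints the new bdev name (Malloc1 in this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1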
00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:35.997 { 00:07:35.997 "aliases": [ 00:07:35.997 "0ae2d1c0-66a0-48b5-8b1e-079de83935b9" 00:07:35.997 ], 00:07:35.997 "assigned_rate_limits": { 00:07:35.997 "r_mbytes_per_sec": 0, 00:07:35.997 "rw_ios_per_sec": 0, 00:07:35.997 "rw_mbytes_per_sec": 0, 00:07:35.997 "w_mbytes_per_sec": 0 00:07:35.997 }, 00:07:35.997 "block_size": 512, 00:07:35.997 "claimed": false, 00:07:35.997 "driver_specific": {}, 00:07:35.997 "memory_domains": [ 00:07:35.997 { 00:07:35.997 "dma_device_id": "system", 00:07:35.997 "dma_device_type": 1 00:07:35.997 }, 00:07:35.997 { 00:07:35.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.997 "dma_device_type": 2 00:07:35.997 } 00:07:35.997 ], 00:07:35.997 "name": "Malloc0", 00:07:35.997 "num_blocks": 16384, 00:07:35.997 "product_name": "Malloc disk", 00:07:35.997 "supported_io_types": { 00:07:35.997 "abort": true, 00:07:35.997 "compare": false, 00:07:35.997 "compare_and_write": false, 00:07:35.997 "copy": true, 00:07:35.997 "flush": true, 00:07:35.997 "get_zone_info": false, 00:07:35.997 "nvme_admin": false, 00:07:35.997 "nvme_io": false, 00:07:35.997 "nvme_io_md": false, 00:07:35.997 "nvme_iov_md": false, 00:07:35.997 "read": true, 00:07:35.997 "reset": true, 00:07:35.997 "seek_data": false, 00:07:35.997 "seek_hole": false, 00:07:35.997 "unmap": true, 00:07:35.997 "write": true, 00:07:35.997 "write_zeroes": true, 00:07:35.997 "zcopy": true, 00:07:35.997 "zone_append": false, 00:07:35.997 "zone_management": false 00:07:35.997 }, 00:07:35.997 "uuid": "0ae2d1c0-66a0-48b5-8b1e-079de83935b9", 00:07:35.997 "zoned": false 00:07:35.997 } 00:07:35.997 ]' 00:07:35.997 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:36.254 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:36.254 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.254 [2024-07-15 18:33:10.523260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:36.254 [2024-07-15 18:33:10.523324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.254 [2024-07-15 18:33:10.523347] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ecad0 00:07:36.254 [2024-07-15 18:33:10.523358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.254 [2024-07-15 18:33:10.525349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.254 [2024-07-15 18:33:10.525388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:36.254 
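The rpc_integrity flow around this point (malloc bdev, passthru bdev stacked on it, bdev_get_bdevs dumps, then teardown) maps onto plain scripts/rpc.py calls; a condensed sketch against a target on the default socket:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  malloc=$($rpc bdev_malloc_create 8 512)     # 8 MiB malloc bdev with 512-byte blocks; prints a name such as Malloc0
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0
  $rpc bdev_get_bdevs | jq length             # expect 2: the malloc bdev plus the passthru on top of it
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length             # back to 0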
Passthru0 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.254 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.254 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.254 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:36.254 { 00:07:36.254 "aliases": [ 00:07:36.254 "0ae2d1c0-66a0-48b5-8b1e-079de83935b9" 00:07:36.254 ], 00:07:36.254 "assigned_rate_limits": { 00:07:36.254 "r_mbytes_per_sec": 0, 00:07:36.254 "rw_ios_per_sec": 0, 00:07:36.254 "rw_mbytes_per_sec": 0, 00:07:36.254 "w_mbytes_per_sec": 0 00:07:36.254 }, 00:07:36.254 "block_size": 512, 00:07:36.254 "claim_type": "exclusive_write", 00:07:36.254 "claimed": true, 00:07:36.254 "driver_specific": {}, 00:07:36.254 "memory_domains": [ 00:07:36.254 { 00:07:36.254 "dma_device_id": "system", 00:07:36.254 "dma_device_type": 1 00:07:36.254 }, 00:07:36.254 { 00:07:36.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.254 "dma_device_type": 2 00:07:36.254 } 00:07:36.254 ], 00:07:36.254 "name": "Malloc0", 00:07:36.254 "num_blocks": 16384, 00:07:36.255 "product_name": "Malloc disk", 00:07:36.255 "supported_io_types": { 00:07:36.255 "abort": true, 00:07:36.255 "compare": false, 00:07:36.255 "compare_and_write": false, 00:07:36.255 "copy": true, 00:07:36.255 "flush": true, 00:07:36.255 "get_zone_info": false, 00:07:36.255 "nvme_admin": false, 00:07:36.255 "nvme_io": false, 00:07:36.255 "nvme_io_md": false, 00:07:36.255 "nvme_iov_md": false, 00:07:36.255 "read": true, 00:07:36.255 "reset": true, 00:07:36.255 "seek_data": false, 00:07:36.255 "seek_hole": false, 00:07:36.255 "unmap": true, 00:07:36.255 "write": true, 00:07:36.255 "write_zeroes": true, 00:07:36.255 "zcopy": true, 00:07:36.255 "zone_append": false, 00:07:36.255 "zone_management": false 00:07:36.255 }, 00:07:36.255 "uuid": "0ae2d1c0-66a0-48b5-8b1e-079de83935b9", 00:07:36.255 "zoned": false 00:07:36.255 }, 00:07:36.255 { 00:07:36.255 "aliases": [ 00:07:36.255 "cd633500-d772-5ed2-bdc7-bc21ecb57fc2" 00:07:36.255 ], 00:07:36.255 "assigned_rate_limits": { 00:07:36.255 "r_mbytes_per_sec": 0, 00:07:36.255 "rw_ios_per_sec": 0, 00:07:36.255 "rw_mbytes_per_sec": 0, 00:07:36.255 "w_mbytes_per_sec": 0 00:07:36.255 }, 00:07:36.255 "block_size": 512, 00:07:36.255 "claimed": false, 00:07:36.255 "driver_specific": { 00:07:36.255 "passthru": { 00:07:36.255 "base_bdev_name": "Malloc0", 00:07:36.255 "name": "Passthru0" 00:07:36.255 } 00:07:36.255 }, 00:07:36.255 "memory_domains": [ 00:07:36.255 { 00:07:36.255 "dma_device_id": "system", 00:07:36.255 "dma_device_type": 1 00:07:36.255 }, 00:07:36.255 { 00:07:36.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.255 "dma_device_type": 2 00:07:36.255 } 00:07:36.255 ], 00:07:36.255 "name": "Passthru0", 00:07:36.255 "num_blocks": 16384, 00:07:36.255 "product_name": "passthru", 00:07:36.255 "supported_io_types": { 00:07:36.255 "abort": true, 00:07:36.255 "compare": false, 00:07:36.255 "compare_and_write": false, 00:07:36.255 "copy": true, 00:07:36.255 "flush": true, 00:07:36.255 "get_zone_info": false, 00:07:36.255 "nvme_admin": false, 00:07:36.255 "nvme_io": false, 00:07:36.255 "nvme_io_md": false, 00:07:36.255 "nvme_iov_md": false, 00:07:36.255 "read": true, 00:07:36.255 "reset": true, 00:07:36.255 "seek_data": false, 00:07:36.255 "seek_hole": 
false, 00:07:36.255 "unmap": true, 00:07:36.255 "write": true, 00:07:36.255 "write_zeroes": true, 00:07:36.255 "zcopy": true, 00:07:36.255 "zone_append": false, 00:07:36.255 "zone_management": false 00:07:36.255 }, 00:07:36.255 "uuid": "cd633500-d772-5ed2-bdc7-bc21ecb57fc2", 00:07:36.255 "zoned": false 00:07:36.255 } 00:07:36.255 ]' 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:36.255 18:33:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:36.255 00:07:36.255 real 0m0.305s 00:07:36.255 user 0m0.170s 00:07:36.255 sys 0m0.058s 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.255 18:33:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.255 ************************************ 00:07:36.255 END TEST rpc_integrity 00:07:36.255 ************************************ 00:07:36.255 18:33:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:36.255 18:33:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:36.255 18:33:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.255 18:33:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.255 18:33:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.512 ************************************ 00:07:36.513 START TEST rpc_plugins 00:07:36.513 ************************************ 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 18:33:10 rpc.rpc_plugins -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:36.513 { 00:07:36.513 "aliases": [ 00:07:36.513 "e6d126d9-93d5-4c08-8429-28ab48303247" 00:07:36.513 ], 00:07:36.513 "assigned_rate_limits": { 00:07:36.513 "r_mbytes_per_sec": 0, 00:07:36.513 "rw_ios_per_sec": 0, 00:07:36.513 "rw_mbytes_per_sec": 0, 00:07:36.513 "w_mbytes_per_sec": 0 00:07:36.513 }, 00:07:36.513 "block_size": 4096, 00:07:36.513 "claimed": false, 00:07:36.513 "driver_specific": {}, 00:07:36.513 "memory_domains": [ 00:07:36.513 { 00:07:36.513 "dma_device_id": "system", 00:07:36.513 "dma_device_type": 1 00:07:36.513 }, 00:07:36.513 { 00:07:36.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.513 "dma_device_type": 2 00:07:36.513 } 00:07:36.513 ], 00:07:36.513 "name": "Malloc1", 00:07:36.513 "num_blocks": 256, 00:07:36.513 "product_name": "Malloc disk", 00:07:36.513 "supported_io_types": { 00:07:36.513 "abort": true, 00:07:36.513 "compare": false, 00:07:36.513 "compare_and_write": false, 00:07:36.513 "copy": true, 00:07:36.513 "flush": true, 00:07:36.513 "get_zone_info": false, 00:07:36.513 "nvme_admin": false, 00:07:36.513 "nvme_io": false, 00:07:36.513 "nvme_io_md": false, 00:07:36.513 "nvme_iov_md": false, 00:07:36.513 "read": true, 00:07:36.513 "reset": true, 00:07:36.513 "seek_data": false, 00:07:36.513 "seek_hole": false, 00:07:36.513 "unmap": true, 00:07:36.513 "write": true, 00:07:36.513 "write_zeroes": true, 00:07:36.513 "zcopy": true, 00:07:36.513 "zone_append": false, 00:07:36.513 "zone_management": false 00:07:36.513 }, 00:07:36.513 "uuid": "e6d126d9-93d5-4c08-8429-28ab48303247", 00:07:36.513 "zoned": false 00:07:36.513 } 00:07:36.513 ]' 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:36.513 18:33:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:36.513 00:07:36.513 real 0m0.154s 00:07:36.513 user 0m0.092s 00:07:36.513 sys 0m0.026s 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 ************************************ 00:07:36.513 END TEST rpc_plugins 00:07:36.513 ************************************ 00:07:36.513 18:33:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:36.513 18:33:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:36.513 18:33:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.513 18:33:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.513 18:33:10 rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.513 ************************************ 00:07:36.513 START TEST rpc_trace_cmd_test 00:07:36.513 ************************************ 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:36.513 "bdev": { 00:07:36.513 "mask": "0x8", 00:07:36.513 "tpoint_mask": "0xffffffffffffffff" 00:07:36.513 }, 00:07:36.513 "bdev_nvme": { 00:07:36.513 "mask": "0x4000", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "blobfs": { 00:07:36.513 "mask": "0x80", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "dsa": { 00:07:36.513 "mask": "0x200", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "ftl": { 00:07:36.513 "mask": "0x40", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "iaa": { 00:07:36.513 "mask": "0x1000", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "iscsi_conn": { 00:07:36.513 "mask": "0x2", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "nvme_pcie": { 00:07:36.513 "mask": "0x800", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "nvme_tcp": { 00:07:36.513 "mask": "0x2000", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "nvmf_rdma": { 00:07:36.513 "mask": "0x10", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "nvmf_tcp": { 00:07:36.513 "mask": "0x20", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "scsi": { 00:07:36.513 "mask": "0x4", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "sock": { 00:07:36.513 "mask": "0x8000", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "thread": { 00:07:36.513 "mask": "0x400", 00:07:36.513 "tpoint_mask": "0x0" 00:07:36.513 }, 00:07:36.513 "tpoint_group_mask": "0x8", 00:07:36.513 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60655" 00:07:36.513 }' 00:07:36.513 18:33:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:36.773 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:36.773 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:36.774 00:07:36.774 real 0m0.255s 00:07:36.774 user 0m0.202s 00:07:36.774 sys 0m0.043s 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.774 18:33:11 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.774 ************************************ 00:07:36.774 END TEST rpc_trace_cmd_test 00:07:36.774 ************************************ 00:07:36.774 18:33:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:36.774 18:33:11 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:07:36.774 18:33:11 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:07:36.774 18:33:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.774 18:33:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.774 18:33:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.030 ************************************ 00:07:37.030 START TEST go_rpc 00:07:37.030 ************************************ 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["c6e908e0-526a-4bd9-9e23-bd574c2803db"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"c6e908e0-526a-4bd9-9e23-bd574c2803db","zoned":false}]' 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:07:37.030 18:33:11 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:07:37.030 00:07:37.030 real 0m0.213s 00:07:37.030 user 0m0.127s 00:07:37.030 sys 0m0.046s 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.030 
************************************ 00:07:37.030 END TEST go_rpc 00:07:37.030 ************************************ 00:07:37.030 18:33:11 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 18:33:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.287 18:33:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:37.287 18:33:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:37.287 18:33:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.287 18:33:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.287 18:33:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 ************************************ 00:07:37.287 START TEST rpc_daemon_integrity 00:07:37.287 ************************************ 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.287 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:37.287 { 00:07:37.287 "aliases": [ 00:07:37.288 "e9ffb338-3f45-4fbc-903f-a8de6d3f9d33" 00:07:37.288 ], 00:07:37.288 "assigned_rate_limits": { 00:07:37.288 "r_mbytes_per_sec": 0, 00:07:37.288 "rw_ios_per_sec": 0, 00:07:37.288 "rw_mbytes_per_sec": 0, 00:07:37.288 "w_mbytes_per_sec": 0 00:07:37.288 }, 00:07:37.288 "block_size": 512, 00:07:37.288 "claimed": false, 00:07:37.288 "driver_specific": {}, 00:07:37.288 "memory_domains": [ 00:07:37.288 { 00:07:37.288 "dma_device_id": "system", 00:07:37.288 "dma_device_type": 1 00:07:37.288 }, 00:07:37.288 { 00:07:37.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.288 "dma_device_type": 2 00:07:37.288 } 00:07:37.288 ], 00:07:37.288 "name": "Malloc3", 00:07:37.288 "num_blocks": 16384, 00:07:37.288 "product_name": "Malloc disk", 00:07:37.288 "supported_io_types": { 00:07:37.288 "abort": true, 00:07:37.288 "compare": false, 00:07:37.288 "compare_and_write": false, 00:07:37.288 "copy": true, 00:07:37.288 "flush": true, 00:07:37.288 "get_zone_info": false, 00:07:37.288 "nvme_admin": false, 00:07:37.288 "nvme_io": false, 00:07:37.288 
"nvme_io_md": false, 00:07:37.288 "nvme_iov_md": false, 00:07:37.288 "read": true, 00:07:37.288 "reset": true, 00:07:37.288 "seek_data": false, 00:07:37.288 "seek_hole": false, 00:07:37.288 "unmap": true, 00:07:37.288 "write": true, 00:07:37.288 "write_zeroes": true, 00:07:37.288 "zcopy": true, 00:07:37.288 "zone_append": false, 00:07:37.288 "zone_management": false 00:07:37.288 }, 00:07:37.288 "uuid": "e9ffb338-3f45-4fbc-903f-a8de6d3f9d33", 00:07:37.288 "zoned": false 00:07:37.288 } 00:07:37.288 ]' 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.288 [2024-07-15 18:33:11.660454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:37.288 [2024-07-15 18:33:11.660525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.288 [2024-07-15 18:33:11.660550] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24e3bd0 00:07:37.288 [2024-07-15 18:33:11.660560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.288 [2024-07-15 18:33:11.662353] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.288 [2024-07-15 18:33:11.662390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:37.288 Passthru0 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:37.288 { 00:07:37.288 "aliases": [ 00:07:37.288 "e9ffb338-3f45-4fbc-903f-a8de6d3f9d33" 00:07:37.288 ], 00:07:37.288 "assigned_rate_limits": { 00:07:37.288 "r_mbytes_per_sec": 0, 00:07:37.288 "rw_ios_per_sec": 0, 00:07:37.288 "rw_mbytes_per_sec": 0, 00:07:37.288 "w_mbytes_per_sec": 0 00:07:37.288 }, 00:07:37.288 "block_size": 512, 00:07:37.288 "claim_type": "exclusive_write", 00:07:37.288 "claimed": true, 00:07:37.288 "driver_specific": {}, 00:07:37.288 "memory_domains": [ 00:07:37.288 { 00:07:37.288 "dma_device_id": "system", 00:07:37.288 "dma_device_type": 1 00:07:37.288 }, 00:07:37.288 { 00:07:37.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.288 "dma_device_type": 2 00:07:37.288 } 00:07:37.288 ], 00:07:37.288 "name": "Malloc3", 00:07:37.288 "num_blocks": 16384, 00:07:37.288 "product_name": "Malloc disk", 00:07:37.288 "supported_io_types": { 00:07:37.288 "abort": true, 00:07:37.288 "compare": false, 00:07:37.288 "compare_and_write": false, 00:07:37.288 "copy": true, 00:07:37.288 "flush": true, 00:07:37.288 "get_zone_info": false, 00:07:37.288 "nvme_admin": false, 00:07:37.288 "nvme_io": false, 00:07:37.288 "nvme_io_md": false, 00:07:37.288 "nvme_iov_md": false, 00:07:37.288 "read": true, 00:07:37.288 "reset": true, 00:07:37.288 
"seek_data": false, 00:07:37.288 "seek_hole": false, 00:07:37.288 "unmap": true, 00:07:37.288 "write": true, 00:07:37.288 "write_zeroes": true, 00:07:37.288 "zcopy": true, 00:07:37.288 "zone_append": false, 00:07:37.288 "zone_management": false 00:07:37.288 }, 00:07:37.288 "uuid": "e9ffb338-3f45-4fbc-903f-a8de6d3f9d33", 00:07:37.288 "zoned": false 00:07:37.288 }, 00:07:37.288 { 00:07:37.288 "aliases": [ 00:07:37.288 "306f1f5d-fe33-550f-b7a3-cbe14e0bcc53" 00:07:37.288 ], 00:07:37.288 "assigned_rate_limits": { 00:07:37.288 "r_mbytes_per_sec": 0, 00:07:37.288 "rw_ios_per_sec": 0, 00:07:37.288 "rw_mbytes_per_sec": 0, 00:07:37.288 "w_mbytes_per_sec": 0 00:07:37.288 }, 00:07:37.288 "block_size": 512, 00:07:37.288 "claimed": false, 00:07:37.288 "driver_specific": { 00:07:37.288 "passthru": { 00:07:37.288 "base_bdev_name": "Malloc3", 00:07:37.288 "name": "Passthru0" 00:07:37.288 } 00:07:37.288 }, 00:07:37.288 "memory_domains": [ 00:07:37.288 { 00:07:37.288 "dma_device_id": "system", 00:07:37.288 "dma_device_type": 1 00:07:37.288 }, 00:07:37.288 { 00:07:37.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.288 "dma_device_type": 2 00:07:37.288 } 00:07:37.288 ], 00:07:37.288 "name": "Passthru0", 00:07:37.288 "num_blocks": 16384, 00:07:37.288 "product_name": "passthru", 00:07:37.288 "supported_io_types": { 00:07:37.288 "abort": true, 00:07:37.288 "compare": false, 00:07:37.288 "compare_and_write": false, 00:07:37.288 "copy": true, 00:07:37.288 "flush": true, 00:07:37.288 "get_zone_info": false, 00:07:37.288 "nvme_admin": false, 00:07:37.288 "nvme_io": false, 00:07:37.288 "nvme_io_md": false, 00:07:37.288 "nvme_iov_md": false, 00:07:37.288 "read": true, 00:07:37.288 "reset": true, 00:07:37.288 "seek_data": false, 00:07:37.288 "seek_hole": false, 00:07:37.288 "unmap": true, 00:07:37.288 "write": true, 00:07:37.288 "write_zeroes": true, 00:07:37.288 "zcopy": true, 00:07:37.288 "zone_append": false, 00:07:37.288 "zone_management": false 00:07:37.288 }, 00:07:37.288 "uuid": "306f1f5d-fe33-550f-b7a3-cbe14e0bcc53", 00:07:37.288 "zoned": false 00:07:37.288 } 00:07:37.288 ]' 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.288 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:37.545 18:33:11 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:37.545 18:33:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:37.545 00:07:37.545 real 0m0.277s 00:07:37.545 user 0m0.159s 00:07:37.545 sys 0m0.042s 00:07:37.545 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.545 18:33:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.545 ************************************ 00:07:37.545 END TEST rpc_daemon_integrity 00:07:37.545 ************************************ 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.545 18:33:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:37.545 18:33:11 rpc -- rpc/rpc.sh@84 -- # killprocess 60655 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 60655 ']' 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@952 -- # kill -0 60655 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@953 -- # uname 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60655 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.545 killing process with pid 60655 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60655' 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@967 -- # kill 60655 00:07:37.545 18:33:11 rpc -- common/autotest_common.sh@972 -- # wait 60655 00:07:38.111 00:07:38.111 real 0m3.251s 00:07:38.111 user 0m3.969s 00:07:38.111 sys 0m0.974s 00:07:38.111 18:33:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.111 18:33:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.111 ************************************ 00:07:38.111 END TEST rpc 00:07:38.111 ************************************ 00:07:38.111 18:33:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.111 18:33:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:38.111 18:33:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.111 18:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.111 18:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:38.111 ************************************ 00:07:38.111 START TEST skip_rpc 00:07:38.111 ************************************ 00:07:38.111 18:33:12 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:38.368 * Looking for test storage... 
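The skip_rpc case starting here is a negative test: spdk_tgt runs with --no-rpc-server and the suite asserts that an RPC call fails cleanly. Boiled down to the same binaries and flags used below (a sketch):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                     # the script simply gives the target time to come up
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
    echo "unexpected: spdk_get_version succeeded with --no-rpc-server" >&2
    exit 1
  fi
  kill "$spdk_pid"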
00:07:38.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:38.368 18:33:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:38.368 18:33:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:38.368 18:33:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:38.368 18:33:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.368 18:33:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.368 18:33:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.368 ************************************ 00:07:38.368 START TEST skip_rpc 00:07:38.368 ************************************ 00:07:38.368 18:33:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:07:38.368 18:33:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60927 00:07:38.368 18:33:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:38.368 18:33:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:38.368 18:33:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:38.368 [2024-07-15 18:33:12.715421] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:38.368 [2024-07-15 18:33:12.715542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ] 00:07:38.626 [2024-07-15 18:33:12.857114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.626 [2024-07-15 18:33:13.029095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.937 2024/07/15 18:33:17 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.937 18:33:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60927 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60927 ']' 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60927 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60927 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.937 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60927' 00:07:43.938 killing process with pid 60927 00:07:43.938 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60927 00:07:43.938 18:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60927 00:07:43.938 00:07:43.938 real 0m5.385s 00:07:43.938 user 0m4.858s 00:07:43.938 sys 0m0.429s 00:07:43.938 18:33:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.938 18:33:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 ************************************ 00:07:43.938 END TEST skip_rpc 00:07:43.938 ************************************ 00:07:43.938 18:33:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:43.938 18:33:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:43.938 18:33:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.938 18:33:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.938 18:33:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 ************************************ 00:07:43.938 START TEST skip_rpc_with_json 00:07:43.938 ************************************ 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61014 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61014 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61014 ']' 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
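Once pid 61014 is listening, the JSON-config case first proves the TCP transport is absent, creates it, then snapshots the whole configuration; by hand that is roughly (a sketch; tcp is the transport this nvmf-tcp job targets):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_get_transports --trtype tcp       # fails with 'No such device' until a transport exists
  $rpc nvmf_create_transport -t tcp           # the target logs '*** TCP Transport Init ***'
  $rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json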
00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.938 18:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 [2024-07-15 18:33:18.135331] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:43.938 [2024-07-15 18:33:18.135434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61014 ] 00:07:43.938 [2024-07-15 18:33:18.269157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.938 [2024-07-15 18:33:18.374798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.871 [2024-07-15 18:33:19.184843] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:44.871 2024/07/15 18:33:19 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:07:44.871 request: 00:07:44.871 { 00:07:44.871 "method": "nvmf_get_transports", 00:07:44.871 "params": { 00:07:44.871 "trtype": "tcp" 00:07:44.871 } 00:07:44.871 } 00:07:44.871 Got JSON-RPC error response 00:07:44.871 GoRPCClient: error on JSON-RPC call 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.871 [2024-07-15 18:33:19.196937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.871 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.129 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:45.129 { 00:07:45.129 "subsystems": [ 00:07:45.129 { 00:07:45.129 "subsystem": "keyring", 00:07:45.129 "config": [] 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "subsystem": "iobuf", 00:07:45.129 "config": [ 00:07:45.129 { 00:07:45.129 "method": "iobuf_set_options", 00:07:45.129 "params": { 00:07:45.129 "large_bufsize": 135168, 00:07:45.129 "large_pool_count": 1024, 00:07:45.129 "small_bufsize": 8192, 00:07:45.129 "small_pool_count": 8192 00:07:45.129 } 00:07:45.129 } 
00:07:45.129 ] 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "subsystem": "sock", 00:07:45.129 "config": [ 00:07:45.129 { 00:07:45.129 "method": "sock_set_default_impl", 00:07:45.129 "params": { 00:07:45.129 "impl_name": "posix" 00:07:45.129 } 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "method": "sock_impl_set_options", 00:07:45.129 "params": { 00:07:45.129 "enable_ktls": false, 00:07:45.129 "enable_placement_id": 0, 00:07:45.129 "enable_quickack": false, 00:07:45.129 "enable_recv_pipe": true, 00:07:45.129 "enable_zerocopy_send_client": false, 00:07:45.129 "enable_zerocopy_send_server": true, 00:07:45.129 "impl_name": "ssl", 00:07:45.129 "recv_buf_size": 4096, 00:07:45.129 "send_buf_size": 4096, 00:07:45.129 "tls_version": 0, 00:07:45.129 "zerocopy_threshold": 0 00:07:45.129 } 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "method": "sock_impl_set_options", 00:07:45.129 "params": { 00:07:45.129 "enable_ktls": false, 00:07:45.129 "enable_placement_id": 0, 00:07:45.129 "enable_quickack": false, 00:07:45.129 "enable_recv_pipe": true, 00:07:45.129 "enable_zerocopy_send_client": false, 00:07:45.129 "enable_zerocopy_send_server": true, 00:07:45.129 "impl_name": "posix", 00:07:45.129 "recv_buf_size": 2097152, 00:07:45.129 "send_buf_size": 2097152, 00:07:45.129 "tls_version": 0, 00:07:45.129 "zerocopy_threshold": 0 00:07:45.129 } 00:07:45.129 } 00:07:45.129 ] 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "subsystem": "vmd", 00:07:45.129 "config": [] 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "subsystem": "accel", 00:07:45.129 "config": [ 00:07:45.129 { 00:07:45.129 "method": "accel_set_options", 00:07:45.129 "params": { 00:07:45.129 "buf_count": 2048, 00:07:45.129 "large_cache_size": 16, 00:07:45.129 "sequence_count": 2048, 00:07:45.129 "small_cache_size": 128, 00:07:45.129 "task_count": 2048 00:07:45.129 } 00:07:45.129 } 00:07:45.129 ] 00:07:45.129 }, 00:07:45.129 { 00:07:45.129 "subsystem": "bdev", 00:07:45.129 "config": [ 00:07:45.129 { 00:07:45.129 "method": "bdev_set_options", 00:07:45.129 "params": { 00:07:45.129 "bdev_auto_examine": true, 00:07:45.130 "bdev_io_cache_size": 256, 00:07:45.130 "bdev_io_pool_size": 65535, 00:07:45.130 "iobuf_large_cache_size": 16, 00:07:45.130 "iobuf_small_cache_size": 128 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "bdev_raid_set_options", 00:07:45.130 "params": { 00:07:45.130 "process_window_size_kb": 1024 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "bdev_iscsi_set_options", 00:07:45.130 "params": { 00:07:45.130 "timeout_sec": 30 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "bdev_nvme_set_options", 00:07:45.130 "params": { 00:07:45.130 "action_on_timeout": "none", 00:07:45.130 "allow_accel_sequence": false, 00:07:45.130 "arbitration_burst": 0, 00:07:45.130 "bdev_retry_count": 3, 00:07:45.130 "ctrlr_loss_timeout_sec": 0, 00:07:45.130 "delay_cmd_submit": true, 00:07:45.130 "dhchap_dhgroups": [ 00:07:45.130 "null", 00:07:45.130 "ffdhe2048", 00:07:45.130 "ffdhe3072", 00:07:45.130 "ffdhe4096", 00:07:45.130 "ffdhe6144", 00:07:45.130 "ffdhe8192" 00:07:45.130 ], 00:07:45.130 "dhchap_digests": [ 00:07:45.130 "sha256", 00:07:45.130 "sha384", 00:07:45.130 "sha512" 00:07:45.130 ], 00:07:45.130 "disable_auto_failback": false, 00:07:45.130 "fast_io_fail_timeout_sec": 0, 00:07:45.130 "generate_uuids": false, 00:07:45.130 "high_priority_weight": 0, 00:07:45.130 "io_path_stat": false, 00:07:45.130 "io_queue_requests": 0, 00:07:45.130 "keep_alive_timeout_ms": 10000, 00:07:45.130 "low_priority_weight": 0, 
00:07:45.130 "medium_priority_weight": 0, 00:07:45.130 "nvme_adminq_poll_period_us": 10000, 00:07:45.130 "nvme_error_stat": false, 00:07:45.130 "nvme_ioq_poll_period_us": 0, 00:07:45.130 "rdma_cm_event_timeout_ms": 0, 00:07:45.130 "rdma_max_cq_size": 0, 00:07:45.130 "rdma_srq_size": 0, 00:07:45.130 "reconnect_delay_sec": 0, 00:07:45.130 "timeout_admin_us": 0, 00:07:45.130 "timeout_us": 0, 00:07:45.130 "transport_ack_timeout": 0, 00:07:45.130 "transport_retry_count": 4, 00:07:45.130 "transport_tos": 0 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "bdev_nvme_set_hotplug", 00:07:45.130 "params": { 00:07:45.130 "enable": false, 00:07:45.130 "period_us": 100000 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "bdev_wait_for_examine" 00:07:45.130 } 00:07:45.130 ] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "scsi", 00:07:45.130 "config": null 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "scheduler", 00:07:45.130 "config": [ 00:07:45.130 { 00:07:45.130 "method": "framework_set_scheduler", 00:07:45.130 "params": { 00:07:45.130 "name": "static" 00:07:45.130 } 00:07:45.130 } 00:07:45.130 ] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "vhost_scsi", 00:07:45.130 "config": [] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "vhost_blk", 00:07:45.130 "config": [] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "ublk", 00:07:45.130 "config": [] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "nbd", 00:07:45.130 "config": [] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "nvmf", 00:07:45.130 "config": [ 00:07:45.130 { 00:07:45.130 "method": "nvmf_set_config", 00:07:45.130 "params": { 00:07:45.130 "admin_cmd_passthru": { 00:07:45.130 "identify_ctrlr": false 00:07:45.130 }, 00:07:45.130 "discovery_filter": "match_any" 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "nvmf_set_max_subsystems", 00:07:45.130 "params": { 00:07:45.130 "max_subsystems": 1024 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "nvmf_set_crdt", 00:07:45.130 "params": { 00:07:45.130 "crdt1": 0, 00:07:45.130 "crdt2": 0, 00:07:45.130 "crdt3": 0 00:07:45.130 } 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "method": "nvmf_create_transport", 00:07:45.130 "params": { 00:07:45.130 "abort_timeout_sec": 1, 00:07:45.130 "ack_timeout": 0, 00:07:45.130 "buf_cache_size": 4294967295, 00:07:45.130 "c2h_success": true, 00:07:45.130 "data_wr_pool_size": 0, 00:07:45.130 "dif_insert_or_strip": false, 00:07:45.130 "in_capsule_data_size": 4096, 00:07:45.130 "io_unit_size": 131072, 00:07:45.130 "max_aq_depth": 128, 00:07:45.130 "max_io_qpairs_per_ctrlr": 127, 00:07:45.130 "max_io_size": 131072, 00:07:45.130 "max_queue_depth": 128, 00:07:45.130 "num_shared_buffers": 511, 00:07:45.130 "sock_priority": 0, 00:07:45.130 "trtype": "TCP", 00:07:45.130 "zcopy": false 00:07:45.130 } 00:07:45.130 } 00:07:45.130 ] 00:07:45.130 }, 00:07:45.130 { 00:07:45.130 "subsystem": "iscsi", 00:07:45.130 "config": [ 00:07:45.130 { 00:07:45.130 "method": "iscsi_set_options", 00:07:45.130 "params": { 00:07:45.130 "allow_duplicated_isid": false, 00:07:45.130 "chap_group": 0, 00:07:45.130 "data_out_pool_size": 2048, 00:07:45.130 "default_time2retain": 20, 00:07:45.130 "default_time2wait": 2, 00:07:45.130 "disable_chap": false, 00:07:45.130 "error_recovery_level": 0, 00:07:45.130 "first_burst_length": 8192, 00:07:45.130 "immediate_data": true, 00:07:45.130 "immediate_data_pool_size": 16384, 00:07:45.130 "max_connections_per_session": 
2, 00:07:45.130 "max_large_datain_per_connection": 64, 00:07:45.130 "max_queue_depth": 64, 00:07:45.130 "max_r2t_per_connection": 4, 00:07:45.130 "max_sessions": 128, 00:07:45.130 "mutual_chap": false, 00:07:45.130 "node_base": "iqn.2016-06.io.spdk", 00:07:45.130 "nop_in_interval": 30, 00:07:45.130 "nop_timeout": 60, 00:07:45.130 "pdu_pool_size": 36864, 00:07:45.130 "require_chap": false 00:07:45.130 } 00:07:45.130 } 00:07:45.130 ] 00:07:45.130 } 00:07:45.130 ] 00:07:45.130 } 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61014 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61014 ']' 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61014 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61014 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.130 killing process with pid 61014 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61014' 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61014 00:07:45.130 18:33:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61014 00:07:45.391 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:45.391 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61054 00:07:45.391 18:33:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61054 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61054 ']' 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61054 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61054 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.701 killing process with pid 61054 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61054' 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61054 00:07:50.701 18:33:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61054 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:50.701 00:07:50.701 real 0m7.031s 00:07:50.701 user 0m6.881s 00:07:50.701 sys 0m0.608s 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:50.701 ************************************ 00:07:50.701 END TEST skip_rpc_with_json 00:07:50.701 ************************************ 00:07:50.701 18:33:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:50.701 18:33:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:50.701 18:33:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.701 18:33:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.701 18:33:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.701 ************************************ 00:07:50.701 START TEST skip_rpc_with_delay 00:07:50.701 ************************************ 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:50.701 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.959 [2024-07-15 18:33:25.236930] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
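The JSON dump that opens this section is the running configuration skip_rpc_with_json captured with save_config into test/rpc/config.json; the trace above then restarts the target from that file with --no-rpc-server and greps its output for the transport-init notice, and skip_rpc_with_delay confirms that --wait-for-rpc is refused when no RPC server is going to be started. A condensed sketch of both checks, with the binary and paths copied from this run (the log redirection is an assumption; the authoritative flow is test/rpc/skip_rpc.sh):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
CFG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
LOG=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

# 1) Relaunch from the saved JSON config and confirm the nvmf TCP transport came back.
"$SPDK_TGT" --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 &
pid=$!
sleep 5                                # the test sleeps 5s instead of waiting on an RPC socket
kill "$pid" && wait "$pid" || true
grep -q 'TCP Transport Init' "$LOG"    # pass criterion used in the trace above
rm "$LOG"

# 2) --wait-for-rpc makes no sense without an RPC server, so the target must refuse it.
if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected success: --wait-for-rpc should have been rejected" >&2
    exit 1
fi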
00:07:50.959 [2024-07-15 18:33:25.237116] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.959 00:07:50.959 real 0m0.092s 00:07:50.959 user 0m0.053s 00:07:50.959 sys 0m0.038s 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.959 18:33:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:50.959 ************************************ 00:07:50.959 END TEST skip_rpc_with_delay 00:07:50.959 ************************************ 00:07:50.959 18:33:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:50.959 18:33:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:50.959 18:33:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:50.959 18:33:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:50.959 18:33:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.959 18:33:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.959 18:33:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.959 ************************************ 00:07:50.959 START TEST exit_on_failed_rpc_init 00:07:50.959 ************************************ 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61163 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61163 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61163 ']' 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.959 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:50.959 [2024-07-15 18:33:25.376163] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:07:50.959 [2024-07-15 18:33:25.376253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61163 ] 00:07:51.217 [2024-07-15 18:33:25.511984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.217 [2024-07-15 18:33:25.619580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:51.475 18:33:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:51.475 [2024-07-15 18:33:25.928217] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:51.475 [2024-07-15 18:33:25.928320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61180 ] 00:07:51.738 [2024-07-15 18:33:26.073396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.738 [2024-07-15 18:33:26.187684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.738 [2024-07-15 18:33:26.187794] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
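The "socket in use" error above is exactly what exit_on_failed_rpc_init is after: both instances default to /var/tmp/spdk.sock for their RPC listener, so the second target cannot start its RPC service and exits non-zero, which the follow-up errors just below confirm. A minimal sketch of the collision (binary path from this run; the plain sleep is a crude stand-in for the test's waitforlisten helper):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &            # first instance claims /var/tmp/spdk.sock
first=$!
sleep 1                         # assumption: long enough for the listener to come up
if "$SPDK_TGT" -m 0x2; then     # must fail: RPC Unix domain socket already in use
    echo "expected the second target to fail" >&2
fi
kill "$first"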
00:07:51.738 [2024-07-15 18:33:26.187807] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:51.738 [2024-07-15 18:33:26.187817] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61163 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61163 ']' 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61163 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61163 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.998 killing process with pid 61163 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61163' 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61163 00:07:51.998 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61163 00:07:52.257 00:07:52.257 real 0m1.324s 00:07:52.257 user 0m1.526s 00:07:52.257 sys 0m0.377s 00:07:52.257 ************************************ 00:07:52.257 END TEST exit_on_failed_rpc_init 00:07:52.257 ************************************ 00:07:52.257 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.257 18:33:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:52.257 18:33:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:52.257 18:33:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:52.257 00:07:52.257 real 0m14.167s 00:07:52.257 user 0m13.429s 00:07:52.257 sys 0m1.674s 00:07:52.257 18:33:26 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.257 18:33:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.257 ************************************ 00:07:52.257 END TEST skip_rpc 00:07:52.257 ************************************ 00:07:52.257 18:33:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:52.257 18:33:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:52.257 18:33:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.257 
18:33:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.257 18:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:52.515 ************************************ 00:07:52.515 START TEST rpc_client 00:07:52.515 ************************************ 00:07:52.515 18:33:26 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:52.515 * Looking for test storage... 00:07:52.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:52.515 18:33:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:52.515 OK 00:07:52.515 18:33:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:52.515 00:07:52.515 real 0m0.110s 00:07:52.515 user 0m0.049s 00:07:52.515 sys 0m0.067s 00:07:52.515 18:33:26 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.515 ************************************ 00:07:52.515 END TEST rpc_client 00:07:52.515 ************************************ 00:07:52.515 18:33:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:52.515 18:33:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:52.515 18:33:26 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:52.515 18:33:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.515 18:33:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.515 18:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:52.515 ************************************ 00:07:52.515 START TEST json_config 00:07:52.515 ************************************ 00:07:52.515 18:33:26 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.515 18:33:26 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.515 18:33:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.515 18:33:26 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.515 18:33:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.515 18:33:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.515 18:33:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.515 18:33:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.515 18:33:26 json_config -- paths/export.sh@5 -- # export PATH 00:07:52.515 18:33:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@47 -- # : 0 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.515 18:33:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:52.515 18:33:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:52.516 INFO: JSON configuration test init 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:52.516 18:33:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.516 18:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.516 18:33:26 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:52.516 18:33:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.516 18:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.775 18:33:26 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:52.775 18:33:27 json_config -- json_config/common.sh@9 -- # local app=target 00:07:52.775 18:33:27 json_config -- json_config/common.sh@10 -- # shift 00:07:52.775 18:33:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:52.775 18:33:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:52.775 18:33:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:52.775 18:33:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:52.775 18:33:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:52.775 18:33:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61300 00:07:52.775 Waiting for target to run... 00:07:52.775 18:33:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
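json_config_test_start_app above amounts to launching spdk_tgt with the per-app parameters (-m 0x1 -s 1024) on its own RPC socket in --wait-for-rpc mode, then blocking until that socket answers, which is what the waitforlisten call just below does. A simplified stand-in for that launch-and-wait step (socket path and flags copied from this run; polling with rpc_get_methods is an assumption, any cheap RPC would do):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
# Poll the RPC socket until the target responds (the real helper also caps the retries).
until "$RPC" -s "$SOCK" -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done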
00:07:52.775 18:33:27 json_config -- json_config/common.sh@25 -- # waitforlisten 61300 /var/tmp/spdk_tgt.sock 00:07:52.775 18:33:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 61300 ']' 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.775 18:33:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.775 [2024-07-15 18:33:27.052829] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:52.775 [2024-07-15 18:33:27.052924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:07:53.033 [2024-07-15 18:33:27.424742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.033 [2024-07-15 18:33:27.508482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:53.598 00:07:53.598 18:33:28 json_config -- json_config/common.sh@26 -- # echo '' 00:07:53.598 18:33:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:53.598 18:33:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.598 18:33:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:53.598 18:33:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.598 18:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.856 18:33:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:53.856 18:33:28 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:53.856 18:33:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:54.420 18:33:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.420 18:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:54.420 18:33:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:54.420 18:33:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:54.679 18:33:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:54.679 18:33:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:54.679 18:33:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:54.679 18:33:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:54.679 18:33:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.679 18:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:54.679 18:33:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.679 18:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:07:54.679 18:33:29 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:54.679 18:33:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:54.937 MallocForNvmf0 00:07:54.937 18:33:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:54.937 18:33:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:55.196 MallocForNvmf1 00:07:55.196 18:33:29 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:55.196 18:33:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:55.455 [2024-07-15 18:33:29.809642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.455 18:33:29 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.455 18:33:29 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.713 18:33:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:55.713 18:33:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:55.972 18:33:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:55.972 18:33:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:56.231 18:33:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:56.231 18:33:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:56.489 [2024-07-15 18:33:30.750131] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:56.489 18:33:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:56.489 18:33:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.489 18:33:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:56.489 18:33:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:56.489 18:33:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.489 18:33:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:56.489 18:33:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:56.489 18:33:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:56.490 18:33:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:56.748 MallocBdevForConfigChangeCheck 00:07:56.748 18:33:31 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:56.748 18:33:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.748 18:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:56.748 18:33:31 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:56.748 18:33:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:57.316 INFO: shutting down applications... 00:07:57.316 18:33:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
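Collected in one place, the nvmf configuration that create_nvmf_subsystem_config just assembled over RPC is only a handful of calls, all copied verbatim from the trace above: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420. save_config then records the same state, which MallocBdevForConfigChangeCheck and spdk_tgt_config.json are used to verify further down.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # backing bdevs for the namespaces
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0             # triggers the "TCP Transport Init" notice
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420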
00:07:57.316 18:33:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:57.316 18:33:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:57.316 18:33:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:57.316 18:33:31 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:57.574 Calling clear_iscsi_subsystem 00:07:57.574 Calling clear_nvmf_subsystem 00:07:57.574 Calling clear_nbd_subsystem 00:07:57.574 Calling clear_ublk_subsystem 00:07:57.574 Calling clear_vhost_blk_subsystem 00:07:57.574 Calling clear_vhost_scsi_subsystem 00:07:57.574 Calling clear_bdev_subsystem 00:07:57.574 18:33:31 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:57.574 18:33:31 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:57.574 18:33:31 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:57.574 18:33:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:57.574 18:33:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:57.575 18:33:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:58.141 18:33:32 json_config -- json_config/json_config.sh@345 -- # break 00:07:58.141 18:33:32 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:58.141 18:33:32 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:58.141 18:33:32 json_config -- json_config/common.sh@31 -- # local app=target 00:07:58.141 18:33:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:58.141 18:33:32 json_config -- json_config/common.sh@35 -- # [[ -n 61300 ]] 00:07:58.141 18:33:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61300 00:07:58.141 18:33:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:58.141 18:33:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:58.141 18:33:32 json_config -- json_config/common.sh@41 -- # kill -0 61300 00:07:58.141 18:33:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:58.707 18:33:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:58.707 18:33:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:58.707 18:33:32 json_config -- json_config/common.sh@41 -- # kill -0 61300 00:07:58.707 18:33:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:58.707 18:33:32 json_config -- json_config/common.sh@43 -- # break 00:07:58.707 18:33:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:58.707 SPDK target shutdown done 00:07:58.707 INFO: relaunching applications... 00:07:58.707 18:33:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:58.707 18:33:32 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
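The shutdown path above first clears every subsystem with clear_config.py and then keeps piping save_config through config_filter.py until nothing but global parameters remains, before stopping the target with SIGINT. A hedged sketch of that emptiness loop (tool paths from this run; the pipeline order is reconstructed from the trace, json_config.sh is the authoritative version):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
CLEAR=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py

$CLEAR -s /var/tmp/spdk_tgt.sock clear_config      # prints the "Calling clear_*_subsystem" lines
count=100
while (( count > 0 )); do
    # Done once the live config, minus global parameters, is empty.
    if $RPC save_config | $FILTER -method delete_global_parameters | $FILTER -method check_empty; then
        break
    fi
    (( count-- ))
done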
00:07:58.707 18:33:32 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:58.707 18:33:32 json_config -- json_config/common.sh@9 -- # local app=target 00:07:58.707 18:33:32 json_config -- json_config/common.sh@10 -- # shift 00:07:58.707 18:33:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:58.707 18:33:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:58.707 18:33:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:58.707 18:33:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:58.707 18:33:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:58.707 18:33:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61573 00:07:58.707 18:33:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:58.707 Waiting for target to run... 00:07:58.707 18:33:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:58.707 18:33:32 json_config -- json_config/common.sh@25 -- # waitforlisten 61573 /var/tmp/spdk_tgt.sock 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 61573 ']' 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:58.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.707 18:33:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.707 [2024-07-15 18:33:32.981611] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:58.707 [2024-07-15 18:33:32.981707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61573 ] 00:07:58.965 [2024-07-15 18:33:33.349342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.965 [2024-07-15 18:33:33.448288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.532 [2024-07-15 18:33:33.783111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.532 [2024-07-15 18:33:33.815179] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:59.791 00:07:59.791 INFO: Checking if target configuration is the same... 00:07:59.791 18:33:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.791 18:33:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:59.791 18:33:34 json_config -- json_config/common.sh@26 -- # echo '' 00:07:59.791 18:33:34 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:59.791 18:33:34 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
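The relaunch above (pid 61573) boots from spdk_tgt_config.json, the file save_config wrote before shutdown, and the next step proves the round trip was lossless by diffing a fresh save_config against that file; json_diff.sh receives the live dump on /dev/fd/62, presumably via process substitution. The core of the check, with paths from this run:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
DIFF=/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
"$DIFF" <($RPC save_config) "$SAVED"    # exit 0: both configs sort and compare equal

The same helper runs a second time after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, and must then return 1; that is the "configuration change detected" branch further below.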
00:07:59.791 18:33:34 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:59.791 18:33:34 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:59.791 18:33:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.791 + '[' 2 -ne 2 ']' 00:07:59.792 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:59.792 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:59.792 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:59.792 +++ basename /dev/fd/62 00:07:59.792 ++ mktemp /tmp/62.XXX 00:07:59.792 + tmp_file_1=/tmp/62.9Lg 00:07:59.792 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:59.792 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:59.792 + tmp_file_2=/tmp/spdk_tgt_config.json.wrF 00:07:59.792 + ret=0 00:07:59.792 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:00.051 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:00.308 + diff -u /tmp/62.9Lg /tmp/spdk_tgt_config.json.wrF 00:08:00.308 INFO: JSON config files are the same 00:08:00.308 + echo 'INFO: JSON config files are the same' 00:08:00.308 + rm /tmp/62.9Lg /tmp/spdk_tgt_config.json.wrF 00:08:00.308 + exit 0 00:08:00.308 INFO: changing configuration and checking if this can be detected... 00:08:00.308 18:33:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:08:00.308 18:33:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:00.308 18:33:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:00.308 18:33:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:00.566 18:33:34 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:00.566 18:33:34 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:08:00.566 18:33:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:00.566 + '[' 2 -ne 2 ']' 00:08:00.566 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:00.566 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:00.566 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:00.566 +++ basename /dev/fd/62 00:08:00.566 ++ mktemp /tmp/62.XXX 00:08:00.566 + tmp_file_1=/tmp/62.zEm 00:08:00.566 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:00.566 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:00.566 + tmp_file_2=/tmp/spdk_tgt_config.json.FwI 00:08:00.566 + ret=0 00:08:00.566 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:00.823 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:01.082 + diff -u /tmp/62.zEm /tmp/spdk_tgt_config.json.FwI 00:08:01.082 + ret=1 00:08:01.082 + echo '=== Start of file: /tmp/62.zEm ===' 00:08:01.082 + cat /tmp/62.zEm 00:08:01.082 + echo '=== End of file: /tmp/62.zEm ===' 00:08:01.082 + echo '' 00:08:01.082 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FwI ===' 00:08:01.082 + cat /tmp/spdk_tgt_config.json.FwI 00:08:01.082 + echo '=== End of file: /tmp/spdk_tgt_config.json.FwI ===' 00:08:01.082 + echo '' 00:08:01.082 + rm /tmp/62.zEm /tmp/spdk_tgt_config.json.FwI 00:08:01.082 + exit 1 00:08:01.082 INFO: configuration change detected. 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@317 -- # [[ -n 61573 ]] 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@193 -- # uname -s 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 18:33:35 json_config -- json_config/json_config.sh@323 -- # killprocess 61573 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@948 -- # '[' -z 61573 ']' 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@952 -- # kill -0 61573 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@953 -- # uname 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61573 00:08:01.082 
18:33:35 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.082 killing process with pid 61573 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61573' 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@967 -- # kill 61573 00:08:01.082 18:33:35 json_config -- common/autotest_common.sh@972 -- # wait 61573 00:08:01.340 18:33:35 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:01.340 18:33:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:08:01.340 18:33:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.340 18:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.340 INFO: Success 00:08:01.340 18:33:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:08:01.340 18:33:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:08:01.340 00:08:01.340 real 0m8.844s 00:08:01.340 user 0m12.765s 00:08:01.340 sys 0m2.026s 00:08:01.340 ************************************ 00:08:01.340 END TEST json_config 00:08:01.340 ************************************ 00:08:01.340 18:33:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.340 18:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.341 18:33:35 -- common/autotest_common.sh@1142 -- # return 0 00:08:01.341 18:33:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:01.341 18:33:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.341 18:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.341 18:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:01.341 ************************************ 00:08:01.341 START TEST json_config_extra_key 00:08:01.341 ************************************ 00:08:01.341 18:33:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.599 18:33:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.599 18:33:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.599 18:33:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.599 18:33:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.599 18:33:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.599 18:33:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.599 18:33:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:01.599 18:33:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.599 18:33:35 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.599 18:33:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:01.599 INFO: launching applications... 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:01.599 18:33:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.599 Waiting for target to run... 00:08:01.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61750 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
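The json_config_extra_key test above starts the target through json_config_test_start_app: spdk_tgt is launched on a private RPC socket with the extra_key.json config (the exact command appears in the trace below), and the test then blocks in waitforlisten until that socket answers RPCs. A minimal bash sketch of that start-up pattern follows; the polling loop body is an assumption for illustration, since waitforlisten's internals are not shown in this trace.

# Launch spdk_tgt with a JSON config on its own RPC socket, then wait for it to listen.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
    --json "$SPDK/test/json_config/extra_key.json" &
pid=$!
for _ in $(seq 1 30); do                                    # retry budget is illustrative
    "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done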
00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61750 /var/tmp/spdk_tgt.sock 00:08:01.599 18:33:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61750 ']' 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.599 18:33:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:01.599 [2024-07-15 18:33:35.978058] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:01.599 [2024-07-15 18:33:35.978395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61750 ] 00:08:02.164 [2024-07-15 18:33:36.368839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.164 [2024-07-15 18:33:36.463785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.728 18:33:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.728 18:33:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:02.728 00:08:02.728 INFO: shutting down applications... 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:02.728 18:33:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
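The shutdown sequence below (json_config_test_shutdown_app) asks the target to exit with SIGINT and then polls kill -0 until the PID disappears, allowing up to 30 half-second retries. A minimal sketch of that pattern, with an illustrative PID:

pid=61750                                  # illustrative; the test keeps this in app_pid["target"]
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests whether the process still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'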
00:08:02.728 18:33:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61750 ]] 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61750 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61750 00:08:02.728 18:33:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:03.293 SPDK target shutdown done 00:08:03.293 Success 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61750 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:03.293 18:33:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:03.293 18:33:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:03.293 ************************************ 00:08:03.293 END TEST json_config_extra_key 00:08:03.293 ************************************ 00:08:03.293 00:08:03.293 real 0m1.705s 00:08:03.293 user 0m1.560s 00:08:03.293 sys 0m0.454s 00:08:03.293 18:33:37 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.293 18:33:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 18:33:37 -- common/autotest_common.sh@1142 -- # return 0 00:08:03.293 18:33:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:03.293 18:33:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.293 18:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.293 18:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 ************************************ 00:08:03.293 START TEST alias_rpc 00:08:03.293 ************************************ 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:03.293 * Looking for test storage... 00:08:03.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:03.293 18:33:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:03.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
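The alias_rpc test starting here exercises RPC method aliases by replaying a JSON configuration into the running target with scripts/rpc.py load_config -i (the call is visible in the trace below). A hedged sketch of that flow; the bdev_malloc_create payload is a made-up example, not the configuration the test actually fed in.

"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock load_config -i <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create", "params": { "num_blocks": 8192, "block_size": 512 } }
      ]
    }
  ]
}
EOF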
00:08:03.293 18:33:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61832 00:08:03.293 18:33:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61832 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61832 ']' 00:08:03.293 18:33:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.293 18:33:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 [2024-07-15 18:33:37.763539] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:03.293 [2024-07-15 18:33:37.763708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:08:03.551 [2024-07-15 18:33:37.910981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.551 [2024-07-15 18:33:38.029823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.485 18:33:38 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.485 18:33:38 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:04.485 18:33:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:04.744 18:33:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61832 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61832 ']' 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61832 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61832 00:08:04.744 killing process with pid 61832 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61832' 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@967 -- # kill 61832 00:08:04.744 18:33:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 61832 00:08:05.034 ************************************ 00:08:05.034 END TEST alias_rpc 00:08:05.034 ************************************ 00:08:05.034 00:08:05.034 real 0m1.894s 00:08:05.034 user 0m2.202s 00:08:05.034 sys 0m0.496s 00:08:05.034 18:33:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.034 18:33:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.321 18:33:39 -- common/autotest_common.sh@1142 -- # return 0 00:08:05.321 18:33:39 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:08:05.321 18:33:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.321 18:33:39 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.321 18:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.321 18:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.322 ************************************ 00:08:05.322 START TEST dpdk_mem_utility 00:08:05.322 ************************************ 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.322 * Looking for test storage... 00:08:05.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:05.322 18:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:05.322 18:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61919 00:08:05.322 18:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.322 18:33:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61919 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61919 ']' 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.322 18:33:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:05.322 [2024-07-15 18:33:39.670143] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:05.322 [2024-07-15 18:33:39.670221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61919 ] 00:08:05.581 [2024-07-15 18:33:39.808360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.581 [2024-07-15 18:33:39.928748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.520 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.520 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:08:06.520 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:06.520 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:06.520 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.520 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:06.520 { 00:08:06.520 "filename": "/tmp/spdk_mem_dump.txt" 00:08:06.520 } 00:08:06.520 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.520 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:06.520 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:06.520 1 heaps totaling size 814.000000 MiB 00:08:06.520 size: 814.000000 MiB heap id: 0 00:08:06.520 end heaps---------- 00:08:06.520 8 mempools totaling size 598.116089 MiB 00:08:06.520 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:06.520 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:06.520 size: 84.521057 MiB name: bdev_io_61919 00:08:06.520 size: 51.011292 MiB name: evtpool_61919 00:08:06.520 size: 50.003479 MiB name: msgpool_61919 00:08:06.520 size: 21.763794 MiB name: PDU_Pool 00:08:06.520 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:06.520 size: 0.026123 MiB name: Session_Pool 00:08:06.520 end mempools------- 00:08:06.520 6 memzones totaling size 4.142822 MiB 00:08:06.520 size: 1.000366 MiB name: RG_ring_0_61919 00:08:06.520 size: 1.000366 MiB name: RG_ring_1_61919 00:08:06.520 size: 1.000366 MiB name: RG_ring_4_61919 00:08:06.520 size: 1.000366 MiB name: RG_ring_5_61919 00:08:06.520 size: 0.125366 MiB name: RG_ring_2_61919 00:08:06.520 size: 0.015991 MiB name: RG_ring_3_61919 00:08:06.520 end memzones------- 00:08:06.520 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:06.520 heap id: 0 total size: 814.000000 MiB number of busy elements: 212 number of free elements: 15 00:08:06.520 list of free elements. 
size: 12.488037 MiB 00:08:06.520 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:06.520 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:06.520 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:06.520 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:06.520 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:06.520 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:06.520 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:06.520 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:06.520 element at address: 0x200000200000 with size: 0.837036 MiB 00:08:06.520 element at address: 0x20001aa00000 with size: 0.572449 MiB 00:08:06.520 element at address: 0x20000b200000 with size: 0.489990 MiB 00:08:06.520 element at address: 0x200000800000 with size: 0.487061 MiB 00:08:06.520 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:06.520 element at address: 0x200027e00000 with size: 0.399048 MiB 00:08:06.520 element at address: 0x200003a00000 with size: 0.351685 MiB 00:08:06.520 list of standard malloc elements. size: 199.249390 MiB 00:08:06.520 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:06.520 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:06.520 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:06.520 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:06.520 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:06.520 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:06.520 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:06.520 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:06.520 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:06.520 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:08:06.520 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:08:06.520 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:06.521 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94900 
with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:06.521 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e66280 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e66340 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6cf40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 
00:08:06.521 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:08:06.521 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:06.522 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:06.522 list of memzone associated elements. 
size: 602.262573 MiB 00:08:06.522 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:06.522 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:06.522 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:06.522 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:06.522 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:06.522 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61919_0 00:08:06.522 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:06.522 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61919_0 00:08:06.522 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:06.522 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61919_0 00:08:06.522 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:06.522 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:06.522 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:06.522 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:06.522 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:06.522 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61919 00:08:06.522 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:06.522 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61919 00:08:06.522 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:06.522 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61919 00:08:06.522 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:06.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:06.522 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:06.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:06.522 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:06.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:06.522 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:06.522 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:06.522 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:06.522 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61919 00:08:06.522 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:06.522 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61919 00:08:06.522 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:06.522 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61919 00:08:06.522 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:06.522 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61919 00:08:06.522 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:06.522 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61919 00:08:06.522 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:06.522 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:06.522 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:06.522 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:06.522 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:06.522 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:06.522 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:06.522 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61919 00:08:06.522 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:06.522 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:06.522 element at address: 0x200027e66400 with size: 0.023743 MiB 00:08:06.522 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:06.522 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:06.522 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61919 00:08:06.522 element at address: 0x200027e6c540 with size: 0.002441 MiB 00:08:06.522 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:06.522 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:08:06.522 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61919 00:08:06.522 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:06.522 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61919 00:08:06.522 element at address: 0x200027e6d000 with size: 0.000305 MiB 00:08:06.522 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:06.522 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:06.522 18:33:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61919 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61919 ']' 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61919 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61919 00:08:06.522 killing process with pid 61919 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61919' 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61919 00:08:06.522 18:33:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61919 00:08:06.781 ************************************ 00:08:06.781 END TEST dpdk_mem_utility 00:08:06.781 ************************************ 00:08:06.781 00:08:06.781 real 0m1.649s 00:08:06.781 user 0m1.797s 00:08:06.781 sys 0m0.438s 00:08:06.781 18:33:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.781 18:33:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:06.781 18:33:41 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.781 18:33:41 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:06.781 18:33:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.781 18:33:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.781 18:33:41 -- common/autotest_common.sh@10 -- # set +x 00:08:06.781 ************************************ 00:08:06.781 START TEST event 00:08:06.781 ************************************ 00:08:06.781 18:33:41 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:07.040 * Looking for test storage... 
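The dpdk_mem_utility run above asks the target for a DPDK memory dump (env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt) and then summarizes it with scripts/dpdk_mem_info.py, which produced the heap, mempool and memzone listing printed above. A hedged sketch of doing the same by hand against a running target, assuming the default RPC socket:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
"$SPDK/scripts/dpdk_mem_info.py"                   # heap / mempool / memzone summary
"$SPDK/scripts/dpdk_mem_info.py" -m 0              # per-element detail for heap id 0, as printed above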
00:08:07.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:07.040 18:33:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:07.040 18:33:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:07.040 18:33:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:07.040 18:33:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:07.040 18:33:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.040 18:33:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.040 ************************************ 00:08:07.040 START TEST event_perf 00:08:07.040 ************************************ 00:08:07.040 18:33:41 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:07.040 Running I/O for 1 seconds...[2024-07-15 18:33:41.352399] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:07.040 [2024-07-15 18:33:41.352475] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:08:07.040 [2024-07-15 18:33:41.491299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.299 [2024-07-15 18:33:41.620647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.300 [2024-07-15 18:33:41.620736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.300 [2024-07-15 18:33:41.620828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.300 [2024-07-15 18:33:41.620833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.234 Running I/O for 1 seconds... 00:08:08.234 lcore 0: 167645 00:08:08.234 lcore 1: 167645 00:08:08.234 lcore 2: 167645 00:08:08.234 lcore 3: 167646 00:08:08.234 done. 00:08:08.234 00:08:08.234 real 0m1.368s 00:08:08.234 user 0m4.173s 00:08:08.234 sys 0m0.066s 00:08:08.234 ************************************ 00:08:08.234 END TEST event_perf 00:08:08.234 ************************************ 00:08:08.234 18:33:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.234 18:33:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.493 18:33:42 event -- common/autotest_common.sh@1142 -- # return 0 00:08:08.493 18:33:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:08.493 18:33:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.493 18:33:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.493 18:33:42 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.493 ************************************ 00:08:08.493 START TEST event_reactor 00:08:08.493 ************************************ 00:08:08.493 18:33:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:08.493 [2024-07-15 18:33:42.775225] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:08.493 [2024-07-15 18:33:42.775331] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62047 ] 00:08:08.493 [2024-07-15 18:33:42.913426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.752 [2024-07-15 18:33:43.020039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.686 test_start 00:08:09.686 oneshot 00:08:09.686 tick 100 00:08:09.686 tick 100 00:08:09.686 tick 250 00:08:09.686 tick 100 00:08:09.686 tick 100 00:08:09.686 tick 100 00:08:09.686 tick 250 00:08:09.686 tick 500 00:08:09.686 tick 100 00:08:09.686 tick 100 00:08:09.686 tick 250 00:08:09.686 tick 100 00:08:09.686 tick 100 00:08:09.686 test_end 00:08:09.686 00:08:09.686 real 0m1.348s 00:08:09.686 user 0m1.182s 00:08:09.686 sys 0m0.058s 00:08:09.686 18:33:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.686 18:33:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:09.686 ************************************ 00:08:09.686 END TEST event_reactor 00:08:09.686 ************************************ 00:08:09.686 18:33:44 event -- common/autotest_common.sh@1142 -- # return 0 00:08:09.686 18:33:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:09.686 18:33:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.686 18:33:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.686 18:33:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:09.686 ************************************ 00:08:09.686 START TEST event_reactor_perf 00:08:09.686 ************************************ 00:08:09.686 18:33:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:09.944 [2024-07-15 18:33:44.176723] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:09.944 [2024-07-15 18:33:44.176841] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62082 ] 00:08:09.944 [2024-07-15 18:33:44.318805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.202 [2024-07-15 18:33:44.443076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.137 test_start 00:08:11.137 test_end 00:08:11.137 Performance: 366169 events per second 00:08:11.137 ************************************ 00:08:11.137 END TEST event_reactor_perf 00:08:11.137 ************************************ 00:08:11.137 00:08:11.137 real 0m1.367s 00:08:11.137 user 0m1.209s 00:08:11.137 sys 0m0.050s 00:08:11.137 18:33:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.137 18:33:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 18:33:45 event -- common/autotest_common.sh@1142 -- # return 0 00:08:11.137 18:33:45 event -- event/event.sh@49 -- # uname -s 00:08:11.137 18:33:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:11.137 18:33:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:11.137 18:33:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.137 18:33:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.137 18:33:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 ************************************ 00:08:11.137 START TEST event_scheduler 00:08:11.137 ************************************ 00:08:11.137 18:33:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:11.395 * Looking for test storage... 00:08:11.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:11.395 18:33:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:11.395 18:33:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62144 00:08:11.395 18:33:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:11.395 18:33:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.395 18:33:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62144 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62144 ']' 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.395 18:33:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:11.395 [2024-07-15 18:33:45.719309] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
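The three event-framework micro-benchmarks exercised above (event_perf, reactor and reactor_perf) are standalone binaries driven only by a core mask and a run time in seconds. A hedged sketch of invoking them directly, mirroring the arguments used in this run:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1        # per-lcore event counts after 1 s
"$SPDK/test/event/reactor/reactor" -t 1                     # oneshot/tick timer trace
"$SPDK/test/event/reactor_perf/reactor_perf" -t 1           # events per second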
00:08:11.395 [2024-07-15 18:33:45.719633] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:08:11.395 [2024-07-15 18:33:45.860406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.653 [2024-07-15 18:33:45.990383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.653 [2024-07-15 18:33:45.990557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.653 [2024-07-15 18:33:45.990566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.653 [2024-07-15 18:33:45.990465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.588 18:33:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.588 18:33:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:08:12.588 18:33:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:12.588 18:33:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.588 18:33:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:12.588 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.588 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.588 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.588 POWER: Cannot set governor of lcore 0 to performance 00:08:12.588 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.588 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.588 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.589 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.589 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:12.589 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:12.589 POWER: Unable to set Power Management Environment for lcore 0 00:08:12.589 [2024-07-15 18:33:46.737209] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:12.589 [2024-07-15 18:33:46.737289] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:12.589 [2024-07-15 18:33:46.737328] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:12.589 [2024-07-15 18:33:46.737400] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:12.589 [2024-07-15 18:33:46.737466] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:12.589 [2024-07-15 18:33:46.737501] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 [2024-07-15 18:33:46.821012] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
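The scheduler test above starts its app with --wait-for-rpc, switches to the dynamic scheduler over RPC, and only then lets subsystem initialization proceed; the POWER and governor errors are the DPDK governor failing to initialize inside the VM, after which the dynamic scheduler reports its load, core and busy settings (20, 80, 95) as seen in the trace. A hedged sketch of the same two RPC calls against the default socket:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" framework_set_scheduler dynamic      # must be sent while --wait-for-rpc holds the app before init
"$SPDK/scripts/rpc.py" framework_start_init                 # releases the app to finish subsystem init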
00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 ************************************ 00:08:12.589 START TEST scheduler_create_thread 00:08:12.589 ************************************ 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 2 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 3 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 4 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 5 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 6 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 7 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 8 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 9 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 10 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.589 18:33:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.961 18:33:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.962 18:33:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:13.962 18:33:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:13.962 18:33:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.962 18:33:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.335 ************************************ 00:08:15.335 END TEST scheduler_create_thread 00:08:15.335 ************************************ 00:08:15.336 18:33:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.336 00:08:15.336 real 0m2.615s 00:08:15.336 user 0m0.018s 00:08:15.336 sys 0m0.008s 00:08:15.336 18:33:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.336 18:33:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:08:15.336 18:33:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:15.336 18:33:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62144 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62144 ']' 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62144 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62144 00:08:15.336 killing process with pid 62144 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62144' 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62144 00:08:15.336 18:33:49 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62144 00:08:15.594 [2024-07-15 18:33:49.929153] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
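The scheduler_create_thread trace above drives the scheduler test app purely over JSON-RPC; rpc_cmd is autotest's wrapper around scripts/rpc.py with the socket preset. A condensed, hand-written sketch of the same sequence, assuming the scheduler app is already listening on the default RPC socket and the directory containing scheduler_plugin.py is on PYTHONPATH (paths relative to the spdk repo root):

rpc=scripts/rpc.py
# four threads pinned to cores 0-3 at 100% active, four more pinned at 0% active
for mask in 0x1 0x2 0x4 0x8; do
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# an unpinned thread at 30% load, plus one created idle and then raised to 50%
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
# and one thread that is created only to be deleted again
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"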
00:08:15.853 00:08:15.853 real 0m4.567s 00:08:15.853 user 0m8.654s 00:08:15.853 sys 0m0.397s 00:08:15.853 18:33:50 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.853 ************************************ 00:08:15.853 END TEST event_scheduler 00:08:15.853 ************************************ 00:08:15.853 18:33:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:15.853 18:33:50 event -- common/autotest_common.sh@1142 -- # return 0 00:08:15.853 18:33:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:15.853 18:33:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:15.853 18:33:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.853 18:33:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.853 18:33:50 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.853 ************************************ 00:08:15.853 START TEST app_repeat 00:08:15.853 ************************************ 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62261 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.853 Process app_repeat pid: 62261 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62261' 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:15.853 spdk_app_start Round 0 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:15.853 18:33:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62261 /var/tmp/spdk-nbd.sock 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62261 ']' 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.853 18:33:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:15.853 [2024-07-15 18:33:50.244589] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
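The app_repeat setup traced above amounts to loading the nbd module, starting the test binary on a private RPC socket with two cores, and waiting for that socket to answer. A rough standalone sketch, run from the spdk repo root; waitforlisten is approximated here by polling rpc_get_methods:

sock=/var/tmp/spdk-nbd.sock
modprobe nbd                                      # event.sh@17: make sure /dev/nbd* exists
test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1                                       # crude stand-in for waitforlisten $repeat_pid $sock
done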
00:08:15.853 [2024-07-15 18:33:50.244671] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62261 ] 00:08:16.111 [2024-07-15 18:33:50.378765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.111 [2024-07-15 18:33:50.483200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.111 [2024-07-15 18:33:50.483207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.111 18:33:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.111 18:33:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:16.111 18:33:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.369 Malloc0 00:08:16.369 18:33:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.933 Malloc1 00:08:16.933 18:33:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.933 18:33:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:17.192 /dev/nbd0 00:08:17.192 18:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:17.192 18:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:17.192 18:33:51 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:17.192 1+0 records in 00:08:17.192 1+0 records out 00:08:17.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285935 s, 14.3 MB/s 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:17.192 18:33:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:17.192 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.192 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.192 18:33:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:17.451 /dev/nbd1 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:17.451 1+0 records in 00:08:17.451 1+0 records out 00:08:17.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276473 s, 14.8 MB/s 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:17.451 18:33:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
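Round 0 above then backs each NBD device with a 64 MB malloc bdev and waits for the kernel to expose it before touching it; waitfornbd is just a bounded poll of /proc/partitions followed by one direct read. Roughly (the scratch file path here is illustrative, the trace keeps it under test/event/):

sock=/var/tmp/spdk-nbd.sock
scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096       # 64 MB, 4 KiB blocks -> Malloc0
scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096       # -> Malloc1
scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
for nbd in nbd0 nbd1; do
  for _ in $(seq 1 20); do                                 # waitfornbd: up to 20 short retries
    grep -q -w "$nbd" /proc/partitions && break
    sleep 0.1
  done
  dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct read proves the device works
done
rm -f /tmp/nbdtest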
00:08:17.451 18:33:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:17.709 { 00:08:17.709 "bdev_name": "Malloc0", 00:08:17.709 "nbd_device": "/dev/nbd0" 00:08:17.709 }, 00:08:17.709 { 00:08:17.709 "bdev_name": "Malloc1", 00:08:17.709 "nbd_device": "/dev/nbd1" 00:08:17.709 } 00:08:17.709 ]' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:17.709 { 00:08:17.709 "bdev_name": "Malloc0", 00:08:17.709 "nbd_device": "/dev/nbd0" 00:08:17.709 }, 00:08:17.709 { 00:08:17.709 "bdev_name": "Malloc1", 00:08:17.709 "nbd_device": "/dev/nbd1" 00:08:17.709 } 00:08:17.709 ]' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:17.709 /dev/nbd1' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:17.709 /dev/nbd1' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:17.709 256+0 records in 00:08:17.709 256+0 records out 00:08:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775021 s, 135 MB/s 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:17.709 256+0 records in 00:08:17.709 256+0 records out 00:08:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288557 s, 36.3 MB/s 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:17.709 256+0 records in 00:08:17.709 256+0 records out 00:08:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322578 s, 32.5 MB/s 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.709 18:33:52 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.709 18:33:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.967 18:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.968 18:33:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.225 18:33:52 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.225 18:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.791 18:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:18.791 18:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:18.791 18:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:18.791 18:33:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:18.791 18:33:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:19.049 18:33:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:19.049 [2024-07-15 18:33:53.528781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:19.306 [2024-07-15 18:33:53.633421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.306 [2024-07-15 18:33:53.633423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.306 [2024-07-15 18:33:53.677600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:19.306 [2024-07-15 18:33:53.677654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:22.613 spdk_app_start Round 1 00:08:22.613 18:33:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:22.613 18:33:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:22.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:22.613 18:33:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62261 /var/tmp/spdk-nbd.sock 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62261 ']' 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
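The data path check in each round, condensed from the nbd_dd_data_verify trace above: fill a scratch file with 1 MiB of random data, push it through both NBD devices with O_DIRECT, then read it back and compare byte for byte.

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp=/tmp/nbdrandtest                                # illustrative path; the trace uses test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 256 x 4 KiB = 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp" "$nbd"                        # any differing byte fails the round
done
rm "$tmp"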
00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.613 18:33:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:22.613 18:33:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.613 Malloc0 00:08:22.613 18:33:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.871 Malloc1 00:08:22.871 18:33:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.871 18:33:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:22.872 18:33:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:22.872 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.872 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.872 18:33:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:23.130 /dev/nbd0 00:08:23.130 18:33:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:23.130 18:33:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.130 1+0 records in 00:08:23.130 1+0 records out 
00:08:23.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264983 s, 15.5 MB/s 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.130 18:33:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.131 18:33:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:23.131 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.131 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.131 18:33:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:23.388 /dev/nbd1 00:08:23.388 18:33:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:23.388 18:33:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.388 1+0 records in 00:08:23.388 1+0 records out 00:08:23.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002459 s, 16.7 MB/s 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.388 18:33:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:23.389 18:33:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.389 18:33:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.389 18:33:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:23.389 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.389 18:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.389 18:33:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.389 18:33:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.389 18:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.647 18:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.647 { 00:08:23.647 "bdev_name": "Malloc0", 00:08:23.647 "nbd_device": "/dev/nbd0" 00:08:23.647 }, 00:08:23.647 { 00:08:23.647 "bdev_name": "Malloc1", 00:08:23.647 "nbd_device": "/dev/nbd1" 00:08:23.647 } 
00:08:23.647 ]' 00:08:23.647 18:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.647 { 00:08:23.647 "bdev_name": "Malloc0", 00:08:23.647 "nbd_device": "/dev/nbd0" 00:08:23.647 }, 00:08:23.647 { 00:08:23.647 "bdev_name": "Malloc1", 00:08:23.647 "nbd_device": "/dev/nbd1" 00:08:23.647 } 00:08:23.647 ]' 00:08:23.647 18:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.647 18:33:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.647 /dev/nbd1' 00:08:23.647 18:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.647 18:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.647 /dev/nbd1' 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:23.648 256+0 records in 00:08:23.648 256+0 records out 00:08:23.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672795 s, 156 MB/s 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:23.648 256+0 records in 00:08:23.648 256+0 records out 00:08:23.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303942 s, 34.5 MB/s 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:23.648 256+0 records in 00:08:23.648 256+0 records out 00:08:23.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320646 s, 32.7 MB/s 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.648 18:33:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.648 18:33:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.215 18:33:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.474 18:33:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:24.732 18:33:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:24.732 18:33:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:24.990 18:33:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:25.247 [2024-07-15 18:33:59.670371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.505 [2024-07-15 18:33:59.824459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.505 [2024-07-15 18:33:59.824467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.505 [2024-07-15 18:33:59.906817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:25.505 [2024-07-15 18:33:59.906882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:28.034 18:34:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:28.034 spdk_app_start Round 2 00:08:28.034 18:34:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:28.034 18:34:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62261 /var/tmp/spdk-nbd.sock 00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62261 ']' 00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
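Teardown of a round, as traced above: confirm both devices are still exported, detach them, and poll /proc/partitions until the kernel has dropped them. Roughly:

sock=/var/tmp/spdk-nbd.sock
count=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 2 ]                                  # Malloc0 and Malloc1 must both still be exported
for nbd in nbd0 nbd1; do
  scripts/rpc.py -s "$sock" nbd_stop_disk "/dev/$nbd"
  for _ in $(seq 1 20); do                          # waitfornbd_exit
    grep -q -w "$nbd" /proc/partitions || break
    sleep 0.1
  done
done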
00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.034 18:34:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:28.293 18:34:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.293 18:34:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:28.293 18:34:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:28.553 Malloc0 00:08:28.553 18:34:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:28.811 Malloc1 00:08:28.811 18:34:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:28.811 18:34:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:29.069 /dev/nbd0 00:08:29.069 18:34:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:29.069 18:34:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.069 1+0 records in 00:08:29.069 1+0 records out 
00:08:29.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267051 s, 15.3 MB/s 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:29.069 18:34:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:29.069 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.069 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.069 18:34:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:29.635 /dev/nbd1 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.635 1+0 records in 00:08:29.635 1+0 records out 00:08:29.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340652 s, 12.0 MB/s 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:29.635 18:34:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.635 18:34:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:29.893 { 00:08:29.893 "bdev_name": "Malloc0", 00:08:29.893 "nbd_device": "/dev/nbd0" 00:08:29.893 }, 00:08:29.893 { 00:08:29.893 "bdev_name": "Malloc1", 00:08:29.893 "nbd_device": "/dev/nbd1" 00:08:29.893 } 
00:08:29.893 ]' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:29.893 { 00:08:29.893 "bdev_name": "Malloc0", 00:08:29.893 "nbd_device": "/dev/nbd0" 00:08:29.893 }, 00:08:29.893 { 00:08:29.893 "bdev_name": "Malloc1", 00:08:29.893 "nbd_device": "/dev/nbd1" 00:08:29.893 } 00:08:29.893 ]' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:29.893 /dev/nbd1' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:29.893 /dev/nbd1' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:29.893 256+0 records in 00:08:29.893 256+0 records out 00:08:29.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00801999 s, 131 MB/s 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:29.893 256+0 records in 00:08:29.893 256+0 records out 00:08:29.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307079 s, 34.1 MB/s 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.893 256+0 records in 00:08:29.893 256+0 records out 00:08:29.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0359519 s, 29.2 MB/s 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.893 18:34:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.893 18:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.894 18:34:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:29.894 18:34:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.894 18:34:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.151 18:34:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.409 18:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.668 18:34:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:30.668 18:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:30.668 18:34:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:30.926 18:34:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:30.926 18:34:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:30.926 18:34:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:31.184 [2024-07-15 18:34:05.616861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.442 [2024-07-15 18:34:05.769056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.442 [2024-07-15 18:34:05.769056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.442 [2024-07-15 18:34:05.848714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:31.442 [2024-07-15 18:34:05.848809] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:33.968 18:34:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62261 /var/tmp/spdk-nbd.sock 00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62261 ']' 00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
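Between rounds the test never kills app_repeat from outside; it asks the app over RPC to end the current iteration and gives it a moment before the next round, which is why the same pid keeps serving Round 0 through Round 3 before the final killprocess. A sketch of the loop that event.sh runs (line numbers taken from the trace):

sock=/var/tmp/spdk-nbd.sock
for round in 0 1 2; do
  echo "spdk_app_start Round $round"
  # ... per-round bdev/NBD checks as traced above ...
  scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM   # event.sh@34: end this iteration
  sleep 3                                                # event.sh@35: let the next round come up
done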
00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.968 18:34:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:34.533 18:34:08 event.app_repeat -- event/event.sh@39 -- # killprocess 62261 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62261 ']' 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62261 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62261 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.533 killing process with pid 62261 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62261' 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62261 00:08:34.533 18:34:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62261 00:08:34.791 spdk_app_start is called in Round 0. 00:08:34.791 Shutdown signal received, stop current app iteration 00:08:34.791 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization... 00:08:34.791 spdk_app_start is called in Round 1. 00:08:34.791 Shutdown signal received, stop current app iteration 00:08:34.791 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization... 00:08:34.791 spdk_app_start is called in Round 2. 00:08:34.791 Shutdown signal received, stop current app iteration 00:08:34.791 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization... 00:08:34.791 spdk_app_start is called in Round 3. 
00:08:34.791 Shutdown signal received, stop current app iteration 00:08:34.791 18:34:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:34.791 18:34:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:34.791 00:08:34.791 real 0m18.887s 00:08:34.791 user 0m41.524s 00:08:34.791 sys 0m3.707s 00:08:34.791 18:34:09 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.791 18:34:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 ************************************ 00:08:34.791 END TEST app_repeat 00:08:34.791 ************************************ 00:08:34.791 18:34:09 event -- common/autotest_common.sh@1142 -- # return 0 00:08:34.791 18:34:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:34.791 18:34:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:34.791 18:34:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.791 18:34:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.791 18:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 ************************************ 00:08:34.791 START TEST cpu_locks 00:08:34.791 ************************************ 00:08:34.791 18:34:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:34.791 * Looking for test storage... 00:08:34.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:34.791 18:34:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:34.791 18:34:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:34.791 18:34:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:34.791 18:34:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:34.791 18:34:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.791 18:34:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.791 18:34:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 ************************************ 00:08:34.791 START TEST default_locks 00:08:34.791 ************************************ 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62878 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62878 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62878 ']' 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.791 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.049 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
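The app_repeat run above ends each round by asking the running instance to shut itself down over JSON-RPC rather than signalling it from outside; the "Shutdown signal received" / "spdk_app_start is called in Round N" pairs are the app stopping and being restarted between iterations. A minimal sketch of that shutdown call, reusing the rpc.py path and socket that appear in the trace (the surrounding restart loop lives in event.sh and is not reproduced here):

  # Ask the SPDK app listening on the nbd socket to exit via SIGTERM,
  # as event.sh does between app_repeat rounds.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM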
00:08:35.049 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.049 18:34:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.049 [2024-07-15 18:34:09.340454] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:35.049 [2024-07-15 18:34:09.340570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62878 ] 00:08:35.049 [2024-07-15 18:34:09.488491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.308 [2024-07-15 18:34:09.642803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.243 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.243 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:08:36.243 18:34:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62878 00:08:36.243 18:34:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:36.243 18:34:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62878 ']' 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.502 killing process with pid 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62878' 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62878 00:08:36.502 18:34:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62878 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62878 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62878 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62878 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62878 ']' 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.068 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62878) - No such process 00:08:37.068 ERROR: process (pid: 62878) is no longer running 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:37.068 00:08:37.068 real 0m2.166s 00:08:37.068 user 0m2.211s 00:08:37.068 sys 0m0.759s 00:08:37.068 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.068 ************************************ 00:08:37.069 END TEST default_locks 00:08:37.069 ************************************ 00:08:37.069 18:34:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.069 18:34:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:37.069 18:34:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:37.069 18:34:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.069 18:34:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.069 18:34:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.069 ************************************ 00:08:37.069 START TEST default_locks_via_rpc 00:08:37.069 ************************************ 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62942 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62942 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62942 ']' 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.069 18:34:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 [2024-07-15 18:34:11.570087] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:37.326 [2024-07-15 18:34:11.570191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ] 00:08:37.326 [2024-07-15 18:34:11.716478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.585 [2024-07-15 18:34:11.888638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:38.151 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62942 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62942 00:08:38.152 18:34:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62942 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62942 ']' 
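The locks_exist checks in the default_locks traces above reduce to asking lslocks whether the target process holds a file lock named spdk_cpu_lock_*; the via_rpc variant additionally shows framework_disable_cpumask_locks releasing the lock (so no_locks passes) and framework_enable_cpumask_locks re-claiming it. A standalone sketch of the same check, with the pid as a placeholder:

  # Mirrors locks_exist from event/cpu_locks.sh: does $pid hold an SPDK per-core lock file?
  pid=62942   # placeholder: any running spdk_tgt pid
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds a CPU core lock"
  fi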
00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62942 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62942 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.719 killing process with pid 62942 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62942' 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62942 00:08:38.719 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62942 00:08:39.285 00:08:39.285 real 0m2.178s 00:08:39.285 user 0m2.232s 00:08:39.285 sys 0m0.737s 00:08:39.285 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.285 ************************************ 00:08:39.285 END TEST default_locks_via_rpc 00:08:39.285 18:34:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.285 ************************************ 00:08:39.285 18:34:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:39.285 18:34:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:39.285 18:34:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.285 18:34:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.285 18:34:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.285 ************************************ 00:08:39.285 START TEST non_locking_app_on_locked_coremask 00:08:39.285 ************************************ 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63012 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63012 /var/tmp/spdk.sock 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63012 ']' 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.285 18:34:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 [2024-07-15 18:34:13.792746] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:39.544 [2024-07-15 18:34:13.792850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63012 ] 00:08:39.544 [2024-07-15 18:34:13.933009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.803 [2024-07-15 18:34:14.092531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:40.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63040 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63040 /var/tmp/spdk2.sock 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63040 ']' 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.369 18:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.627 [2024-07-15 18:34:14.887638] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:40.627 [2024-07-15 18:34:14.887742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63040 ] 00:08:40.627 [2024-07-15 18:34:15.039481] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:40.627 [2024-07-15 18:34:15.039554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.883 [2024-07-15 18:34:15.353741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.816 18:34:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.816 18:34:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:41.816 18:34:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63012 00:08:41.816 18:34:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63012 00:08:41.816 18:34:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63012 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63012 ']' 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63012 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63012 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.750 killing process with pid 63012 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63012' 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63012 00:08:42.750 18:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63012 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63040 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63040 ']' 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63040 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63040 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.124 killing process with pid 63040 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63040' 00:08:44.124 18:34:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63040 00:08:44.124 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63040 00:08:44.382 00:08:44.382 real 0m5.113s 00:08:44.382 user 0m5.463s 00:08:44.382 sys 0m1.532s 00:08:44.382 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.382 18:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 ************************************ 00:08:44.382 END TEST non_locking_app_on_locked_coremask 00:08:44.382 ************************************ 00:08:44.640 18:34:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:44.640 18:34:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:44.640 18:34:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:44.640 18:34:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.640 18:34:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.640 ************************************ 00:08:44.640 START TEST locking_app_on_unlocked_coremask 00:08:44.640 ************************************ 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63130 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63130 /var/tmp/spdk.sock 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63130 ']' 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.640 18:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.640 [2024-07-15 18:34:18.959942] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:44.640 [2024-07-15 18:34:18.960043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63130 ] 00:08:44.640 [2024-07-15 18:34:19.098021] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:44.640 [2024-07-15 18:34:19.098094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.898 [2024-07-15 18:34:19.256443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63158 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63158 /var/tmp/spdk2.sock 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63158 ']' 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.830 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.831 18:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:45.831 [2024-07-15 18:34:20.139976] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:45.831 [2024-07-15 18:34:20.140111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63158 ] 00:08:45.831 [2024-07-15 18:34:20.291270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.395 [2024-07-15 18:34:20.605018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.960 18:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.960 18:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:46.960 18:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63158 00:08:46.960 18:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63158 00:08:46.960 18:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63130 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63130 ']' 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63130 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63130 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:47.894 killing process with pid 63130 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63130' 00:08:47.894 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63130 00:08:48.152 18:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63130 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63158 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63158 ']' 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63158 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63158 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63158' 00:08:49.087 killing process with pid 63158 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63158 00:08:49.087 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63158 00:08:49.345 00:08:49.345 real 0m4.732s 00:08:49.345 user 0m5.070s 00:08:49.345 sys 0m1.636s 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.345 ************************************ 00:08:49.345 END TEST locking_app_on_unlocked_coremask 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.345 ************************************ 00:08:49.345 18:34:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:49.345 18:34:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:49.345 18:34:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.345 18:34:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.345 18:34:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.345 ************************************ 00:08:49.345 START TEST locking_app_on_locked_coremask 00:08:49.345 ************************************ 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63244 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63244 /var/tmp/spdk.sock 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63244 ']' 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.345 18:34:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.345 [2024-07-15 18:34:23.750830] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
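Taken together, the two pairings that finished above exercise --disable-cpumask-locks from both sides: first a locked target plus an unlocked second instance (hence the "CPU core locks deactivated" notice), then an unlocked first target plus a locked second one, which is why the lock check there runs against the second pid (63158). A minimal sketch of that second pairing, using the command lines from the trace (backgrounding simplified; the harness actually waits on each RPC socket):

  # First instance: same core mask, but opts out of per-core lock files.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # Second instance: locks enabled, separate RPC socket; this one claims core 0.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &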
00:08:49.345 [2024-07-15 18:34:23.750911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63244 ] 00:08:49.602 [2024-07-15 18:34:23.887184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.602 [2024-07-15 18:34:24.008047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63271 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63271 /var/tmp/spdk2.sock 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63271 /var/tmp/spdk2.sock 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63271 /var/tmp/spdk2.sock 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63271 ']' 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.536 18:34:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.536 [2024-07-15 18:34:24.775932] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:50.536 [2024-07-15 18:34:24.776027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63271 ] 00:08:50.536 [2024-07-15 18:34:24.913315] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63244 has claimed it. 00:08:50.536 [2024-07-15 18:34:24.913407] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:51.102 ERROR: process (pid: 63271) is no longer running 00:08:51.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63271) - No such process 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63244 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63244 00:08:51.102 18:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63244 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63244 ']' 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63244 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63244 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:51.671 killing process with pid 63244 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63244' 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63244 00:08:51.671 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63244 00:08:52.239 00:08:52.239 real 0m2.999s 00:08:52.239 user 0m3.442s 00:08:52.239 sys 0m0.733s 00:08:52.239 18:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.239 18:34:26 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:08:52.239 ************************************ 00:08:52.239 END TEST locking_app_on_locked_coremask 00:08:52.239 ************************************ 00:08:52.497 18:34:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:52.497 18:34:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.497 18:34:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:52.497 18:34:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.497 18:34:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.497 ************************************ 00:08:52.497 START TEST locking_overlapped_coremask 00:08:52.497 ************************************ 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63328 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63328 /var/tmp/spdk.sock 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63328 ']' 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.497 18:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.497 [2024-07-15 18:34:26.813648] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
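locking_app_on_locked_coremask, which finished above, is the negative case: with pid 63244 already holding core 0, a second spdk_tgt on the same -m 0x1 mask aborts during startup ("Cannot create lock on core 0, probably process 63244 has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting."), and the NOT/waitforlisten wrapper passes precisely because that launch exits non-zero. A simplified sketch of asserting the expected failure (paths as in the trace):

  # The second target on an already-claimed core mask is expected to fail to start.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second instance started despite the core lock" >&2
    exit 1
  fi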
00:08:52.497 [2024-07-15 18:34:26.813757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63328 ] 00:08:52.497 [2024-07-15 18:34:26.960322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.756 [2024-07-15 18:34:27.119576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.756 [2024-07-15 18:34:27.119707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.756 [2024-07-15 18:34:27.119696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63358 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63358 /var/tmp/spdk2.sock 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63358 /var/tmp/spdk2.sock 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:53.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63358 /var/tmp/spdk2.sock 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63358 ']' 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.689 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.690 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.690 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.690 18:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.690 [2024-07-15 18:34:27.945826] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
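In the overlapped setup here, the first target's -m 0x7 covers cores 0-2 and the second target's -m 0x1c covers cores 2-4, so the two masks collide only on core 2, which is the core named in the lock error that follows. The overlap can be confirmed with plain shell arithmetic:

  # 0x7 = cores 0-2, 0x1c = cores 2-4; the shared bit is core 2.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints "overlap mask: 0x4", i.e. bit 2 / core 2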
00:08:53.690 [2024-07-15 18:34:27.945926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63358 ] 00:08:53.690 [2024-07-15 18:34:28.091663] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63328 has claimed it. 00:08:53.690 [2024-07-15 18:34:28.091743] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:54.257 ERROR: process (pid: 63358) is no longer running 00:08:54.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63358) - No such process 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63328 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63328 ']' 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63328 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63328 00:08:54.257 killing process with pid 63328 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63328' 00:08:54.257 18:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63328 00:08:54.257 18:34:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63328 00:08:54.823 00:08:54.823 real 0m2.260s 00:08:54.823 user 0m6.225s 00:08:54.823 sys 0m0.472s 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 ************************************ 00:08:54.823 END TEST locking_overlapped_coremask 00:08:54.823 ************************************ 00:08:54.823 18:34:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:54.823 18:34:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:54.823 18:34:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.823 18:34:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.823 18:34:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 ************************************ 00:08:54.823 START TEST locking_overlapped_coremask_via_rpc 00:08:54.823 ************************************ 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63409 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63409 /var/tmp/spdk.sock 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63409 ']' 00:08:54.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.823 18:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 [2024-07-15 18:34:29.139184] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:54.823 [2024-07-15 18:34:29.139286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:08:54.823 [2024-07-15 18:34:29.286424] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
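Besides expecting the overlapping launch to fail, the check_remaining_locks step in the overlapped test above also inspects the lock files themselves: with only the -m 0x7 target left, exactly /var/tmp/spdk_cpu_lock_000 through _002 should exist. A standalone sketch of that comparison, written the same way the trace does (assumes the single 0x7 target is still running):

  # With a single target on -m 0x7, exactly three per-core lock files are expected.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "core lock files match mask 0x7"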
00:08:54.823 [2024-07-15 18:34:29.286494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.081 [2024-07-15 18:34:29.395202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.081 [2024-07-15 18:34:29.395339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.081 [2024-07-15 18:34:29.395342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63439 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63439 /var/tmp/spdk2.sock 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63439 ']' 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.082 18:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 [2024-07-15 18:34:30.205867] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:56.082 [2024-07-15 18:34:30.206757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:08:56.082 [2024-07-15 18:34:30.344282] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:56.082 [2024-07-15 18:34:30.344331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.356 [2024-07-15 18:34:30.559880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.356 [2024-07-15 18:34:30.563059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.356 [2024-07-15 18:34:30.563062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.924 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.924 [2024-07-15 18:34:31.137120] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63409 has claimed it. 00:08:56.925 2024/07/15 18:34:31 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:08:56.925 request: 00:08:56.925 { 00:08:56.925 "method": "framework_enable_cpumask_locks", 00:08:56.925 "params": {} 00:08:56.925 } 00:08:56.925 Got JSON-RPC error response 00:08:56.925 GoRPCClient: error on JSON-RPC call 00:08:56.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63409 /var/tmp/spdk.sock 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63409 ']' 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.925 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.183 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63439 /var/tmp/spdk2.sock 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63439 ']' 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:57.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:57.184 00:08:57.184 real 0m2.578s 00:08:57.184 user 0m1.256s 00:08:57.184 sys 0m0.261s 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.184 ************************************ 00:08:57.184 END TEST locking_overlapped_coremask_via_rpc 00:08:57.184 ************************************ 00:08:57.184 18:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:57.443 18:34:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:57.443 18:34:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63409 ]] 00:08:57.443 18:34:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63409 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63409 ']' 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63409 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63409 00:08:57.443 killing process with pid 63409 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63409' 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63409 00:08:57.443 18:34:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63409 00:08:58.011 18:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63439 ]] 00:08:58.011 18:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63439 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63439 ']' 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63439 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:58.011 18:34:32 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63439 00:08:58.011 killing process with pid 63439 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63439' 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63439 00:08:58.011 18:34:32 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63439 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:58.270 Process with pid 63409 is not found 00:08:58.270 Process with pid 63439 is not found 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63409 ]] 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63409 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63409 ']' 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63409 00:08:58.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63409) - No such process 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63409 is not found' 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63439 ]] 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63439 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63439 ']' 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63439 00:08:58.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63439) - No such process 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63439 is not found' 00:08:58.270 18:34:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:58.270 00:08:58.270 real 0m23.544s 00:08:58.270 user 0m38.837s 00:08:58.270 sys 0m6.978s 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.270 18:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.270 ************************************ 00:08:58.270 END TEST cpu_locks 00:08:58.270 ************************************ 00:08:58.270 18:34:32 event -- common/autotest_common.sh@1142 -- # return 0 00:08:58.270 00:08:58.270 real 0m51.516s 00:08:58.270 user 1m35.740s 00:08:58.270 sys 0m11.526s 00:08:58.270 18:34:32 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.270 18:34:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:58.270 ************************************ 00:08:58.270 END TEST event 00:08:58.270 ************************************ 00:08:58.556 18:34:32 -- common/autotest_common.sh@1142 -- # return 0 00:08:58.556 18:34:32 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:58.556 18:34:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.556 18:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.556 18:34:32 -- common/autotest_common.sh@10 -- # set +x 00:08:58.556 ************************************ 00:08:58.556 START TEST thread 
00:08:58.556 ************************************ 00:08:58.556 18:34:32 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:58.556 * Looking for test storage... 00:08:58.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:58.556 18:34:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.556 18:34:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:58.556 18:34:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.556 18:34:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.556 ************************************ 00:08:58.556 START TEST thread_poller_perf 00:08:58.556 ************************************ 00:08:58.556 18:34:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.556 [2024-07-15 18:34:32.933960] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:58.556 [2024-07-15 18:34:32.934061] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:08:58.814 [2024-07-15 18:34:33.067902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.814 [2024-07-15 18:34:33.225997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.814 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:00.243 ====================================== 00:09:00.243 busy:2111258872 (cyc) 00:09:00.243 total_run_count: 335000 00:09:00.243 tsc_hz: 2100000000 (cyc) 00:09:00.243 ====================================== 00:09:00.243 poller_cost: 6302 (cyc), 3000 (nsec) 00:09:00.243 00:09:00.243 real 0m1.445s 00:09:00.243 user 0m1.262s 00:09:00.243 sys 0m0.074s 00:09:00.243 18:34:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.243 18:34:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.243 ************************************ 00:09:00.243 END TEST thread_poller_perf 00:09:00.243 ************************************ 00:09:00.244 18:34:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:00.244 18:34:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:00.244 18:34:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:00.244 18:34:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.244 18:34:34 thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.244 ************************************ 00:09:00.244 START TEST thread_poller_perf 00:09:00.244 ************************************ 00:09:00.244 18:34:34 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:00.244 [2024-07-15 18:34:34.443017] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:00.244 [2024-07-15 18:34:34.443539] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:09:00.244 [2024-07-15 18:34:34.589242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.502 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:00.502 [2024-07-15 18:34:34.744990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.435 ====================================== 00:09:01.435 busy:2102006284 (cyc) 00:09:01.435 total_run_count: 4288000 00:09:01.435 tsc_hz: 2100000000 (cyc) 00:09:01.435 ====================================== 00:09:01.435 poller_cost: 490 (cyc), 233 (nsec) 00:09:01.435 ************************************ 00:09:01.435 END TEST thread_poller_perf 00:09:01.435 ************************************ 00:09:01.435 00:09:01.435 real 0m1.446s 00:09:01.435 user 0m1.258s 00:09:01.435 sys 0m0.077s 00:09:01.435 18:34:35 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.435 18:34:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:01.693 18:34:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:01.693 18:34:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:01.693 ************************************ 00:09:01.693 END TEST thread 00:09:01.693 ************************************ 00:09:01.693 00:09:01.693 real 0m3.118s 00:09:01.693 user 0m2.608s 00:09:01.693 sys 0m0.292s 00:09:01.693 18:34:35 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.693 18:34:35 thread -- common/autotest_common.sh@10 -- # set +x 00:09:01.693 18:34:35 -- common/autotest_common.sh@1142 -- # return 0 00:09:01.693 18:34:35 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:01.693 18:34:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.693 18:34:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.693 18:34:35 -- common/autotest_common.sh@10 -- # set +x 00:09:01.693 ************************************ 00:09:01.693 START TEST accel 00:09:01.693 ************************************ 00:09:01.693 18:34:35 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:01.693 * Looking for test storage... 
00:09:01.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:01.693 18:34:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:01.693 18:34:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:01.693 18:34:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:01.693 18:34:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63698 00:09:01.693 18:34:36 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:01.693 18:34:36 accel -- accel/accel.sh@63 -- # waitforlisten 63698 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@829 -- # '[' -z 63698 ']' 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.693 18:34:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.693 18:34:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.693 18:34:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.693 18:34:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.693 18:34:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.693 18:34:36 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.693 18:34:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.693 18:34:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:01.693 18:34:36 accel -- accel/accel.sh@41 -- # jq -r . 00:09:01.693 [2024-07-15 18:34:36.140093] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:01.693 [2024-07-15 18:34:36.140195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63698 ] 00:09:01.951 [2024-07-15 18:34:36.278900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.951 [2024-07-15 18:34:36.434632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.888 18:34:37 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.888 18:34:37 accel -- common/autotest_common.sh@862 -- # return 0 00:09:02.888 18:34:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:02.888 18:34:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:02.888 18:34:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:02.888 18:34:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:02.888 18:34:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:02.889 18:34:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:02.889 18:34:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 
18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:09:02.889 18:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:02.889 18:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:02.889 18:34:37 accel -- accel/accel.sh@75 -- # killprocess 63698 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@948 -- # '[' -z 63698 ']' 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@952 -- # kill -0 63698 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@953 -- # uname 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63698 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.889 killing process with pid 63698 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63698' 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@967 -- # kill 63698 00:09:02.889 18:34:37 accel -- common/autotest_common.sh@972 -- # wait 63698 00:09:03.457 18:34:37 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:03.457 18:34:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.457 18:34:37 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:03.457 18:34:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:09:03.457 18:34:37 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.457 18:34:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:03.457 18:34:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.457 18:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.715 ************************************ 00:09:03.715 START TEST accel_missing_filename 00:09:03.715 ************************************ 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.715 18:34:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:03.715 18:34:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:03.715 18:34:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:03.715 18:34:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.715 18:34:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.716 18:34:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.716 18:34:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.716 18:34:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.716 18:34:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:03.716 18:34:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:03.716 [2024-07-15 18:34:37.971664] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:03.716 [2024-07-15 18:34:37.971780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63773 ] 00:09:03.716 [2024-07-15 18:34:38.115207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.975 [2024-07-15 18:34:38.266296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.975 [2024-07-15 18:34:38.345365] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.234 [2024-07-15 18:34:38.459552] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:04.234 A filename is required. 
00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.234 00:09:04.234 real 0m0.643s 00:09:04.234 user 0m0.412s 00:09:04.234 sys 0m0.170s 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.234 ************************************ 00:09:04.234 18:34:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:04.234 END TEST accel_missing_filename 00:09:04.234 ************************************ 00:09:04.234 18:34:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:04.234 18:34:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:04.234 18:34:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:04.234 18:34:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.234 18:34:38 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.234 ************************************ 00:09:04.234 START TEST accel_compress_verify 00:09:04.234 ************************************ 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.234 18:34:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:04.234 18:34:38 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:09:04.234 18:34:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:04.234 [2024-07-15 18:34:38.666722] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:04.234 [2024-07-15 18:34:38.666826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63803 ] 00:09:04.547 [2024-07-15 18:34:38.816156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.547 [2024-07-15 18:34:38.969809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.805 [2024-07-15 18:34:39.049718] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.805 [2024-07-15 18:34:39.163569] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:04.805 00:09:04.805 Compression does not support the verify option, aborting. 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.806 00:09:04.806 real 0m0.650s 00:09:04.806 user 0m0.418s 00:09:04.806 sys 0m0.165s 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.806 18:34:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:04.806 ************************************ 00:09:04.806 END TEST accel_compress_verify 00:09:04.806 ************************************ 00:09:05.065 18:34:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:05.065 18:34:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:05.065 18:34:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:05.065 18:34:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.065 18:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.065 ************************************ 00:09:05.065 START TEST accel_wrong_workload 00:09:05.065 ************************************ 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:05.065 18:34:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:05.065 Unsupported workload type: foobar 00:09:05.065 [2024-07-15 18:34:39.366610] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:05.065 accel_perf options: 00:09:05.065 [-h help message] 00:09:05.065 [-q queue depth per core] 00:09:05.065 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:05.065 [-T number of threads per core 00:09:05.065 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:05.065 [-t time in seconds] 00:09:05.065 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:05.065 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:05.065 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:05.065 [-l for compress/decompress workloads, name of uncompressed input file 00:09:05.065 [-S for crc32c workload, use this seed value (default 0) 00:09:05.065 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:05.065 [-f for fill workload, use this BYTE value (default 255) 00:09:05.065 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:05.065 [-y verify result if this switch is on] 00:09:05.065 [-a tasks to allocate per core (default: same value as -q)] 00:09:05.065 Can be used to spread operations across a wider range of memory. 
00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.065 00:09:05.065 real 0m0.033s 00:09:05.065 user 0m0.018s 00:09:05.065 sys 0m0.014s 00:09:05.065 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.066 18:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 ************************************ 00:09:05.066 END TEST accel_wrong_workload 00:09:05.066 ************************************ 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:05.066 18:34:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 ************************************ 00:09:05.066 START TEST accel_negative_buffers 00:09:05.066 ************************************ 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:05.066 18:34:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:05.066 -x option must be non-negative. 
00:09:05.066 [2024-07-15 18:34:39.454109] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:05.066 accel_perf options: 00:09:05.066 [-h help message] 00:09:05.066 [-q queue depth per core] 00:09:05.066 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:05.066 [-T number of threads per core 00:09:05.066 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:05.066 [-t time in seconds] 00:09:05.066 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:05.066 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:05.066 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:05.066 [-l for compress/decompress workloads, name of uncompressed input file 00:09:05.066 [-S for crc32c workload, use this seed value (default 0) 00:09:05.066 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:05.066 [-f for fill workload, use this BYTE value (default 255) 00:09:05.066 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:05.066 [-y verify result if this switch is on] 00:09:05.066 [-a tasks to allocate per core (default: same value as -q)] 00:09:05.066 Can be used to spread operations across a wider range of memory. 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.066 00:09:05.066 real 0m0.035s 00:09:05.066 user 0m0.017s 00:09:05.066 sys 0m0.018s 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.066 ************************************ 00:09:05.066 END TEST accel_negative_buffers 00:09:05.066 ************************************ 00:09:05.066 18:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:05.066 18:34:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.066 18:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 ************************************ 00:09:05.066 START TEST accel_crc32c 00:09:05.066 ************************************ 00:09:05.066 18:34:39 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:05.066 18:34:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:05.066 [2024-07-15 18:34:39.543295] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:05.066 [2024-07-15 18:34:39.543386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63861 ] 00:09:05.326 [2024-07-15 18:34:39.678194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.585 [2024-07-15 18:34:39.829957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:05.585 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:05.586 18:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:06.965 18:34:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.965 00:09:06.965 real 0m1.644s 00:09:06.965 user 0m1.398s 00:09:06.965 sys 0m0.152s 00:09:06.965 18:34:41 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.965 18:34:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:06.965 ************************************ 00:09:06.965 END TEST accel_crc32c 00:09:06.965 ************************************ 00:09:06.965 18:34:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:06.965 18:34:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:06.965 18:34:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:06.965 18:34:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.965 18:34:41 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.965 ************************************ 00:09:06.965 START TEST accel_crc32c_C2 00:09:06.965 ************************************ 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:06.965 18:34:41 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:06.965 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:06.965 [2024-07-15 18:34:41.244607] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:06.965 [2024-07-15 18:34:41.244713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63896 ] 00:09:06.965 [2024-07-15 18:34:41.388695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.225 [2024-07-15 18:34:41.543202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.225 18:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:08.611 00:09:08.611 real 0m1.662s 00:09:08.611 user 0m1.401s 00:09:08.611 sys 0m0.168s 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.611 18:34:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:08.611 ************************************ 00:09:08.611 END TEST accel_crc32c_C2 00:09:08.611 ************************************ 00:09:08.611 18:34:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:08.611 18:34:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:08.611 18:34:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:08.611 18:34:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.611 18:34:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.611 ************************************ 00:09:08.611 START TEST accel_copy 00:09:08.611 ************************************ 00:09:08.611 18:34:42 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:09:08.611 18:34:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:08.611 18:34:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:08.611 18:34:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.611 18:34:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.612 18:34:42 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:08.612 18:34:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:08.612 [2024-07-15 18:34:42.960561] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:08.612 [2024-07-15 18:34:42.960659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63936 ] 00:09:08.871 [2024-07-15 18:34:43.104361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.871 [2024-07-15 18:34:43.256179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 
18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:08.871 18:34:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:10.245 18:34:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:10.245 00:09:10.245 real 0m1.648s 00:09:10.245 user 0m1.383s 00:09:10.245 sys 0m0.172s 00:09:10.245 18:34:44 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.245 ************************************ 00:09:10.245 18:34:44 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:10.245 END TEST accel_copy 00:09:10.245 ************************************ 00:09:10.245 18:34:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:10.245 18:34:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.245 18:34:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:10.245 18:34:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.245 18:34:44 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.245 ************************************ 00:09:10.245 START TEST accel_fill 00:09:10.245 ************************************ 00:09:10.245 18:34:44 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.245 18:34:44 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:10.245 18:34:44 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:10.245 [2024-07-15 18:34:44.659040] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:10.245 [2024-07-15 18:34:44.659154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63965 ] 00:09:10.504 [2024-07-15 18:34:44.797175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.504 [2024-07-15 18:34:44.951613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.763 18:34:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:12.141 18:34:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:12.142 00:09:12.142 real 0m1.645s 00:09:12.142 user 0m0.015s 00:09:12.142 sys 0m0.003s 00:09:12.142 18:34:46 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.142 18:34:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:12.142 ************************************ 00:09:12.142 END TEST accel_fill 00:09:12.142 ************************************ 00:09:12.142 18:34:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:12.142 18:34:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:12.142 18:34:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:12.142 18:34:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.142 18:34:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:12.142 ************************************ 00:09:12.142 START TEST accel_copy_crc32c 00:09:12.142 ************************************ 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:09:12.142 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:12.142 [2024-07-15 18:34:46.361030] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:12.142 [2024-07-15 18:34:46.361125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64005 ] 00:09:12.142 [2024-07-15 18:34:46.501732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.401 [2024-07-15 18:34:46.657204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.401 18:34:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.779 00:09:13.779 real 0m1.650s 00:09:13.779 user 0m1.391s 00:09:13.779 sys 0m0.164s 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.779 ************************************ 00:09:13.779 END TEST accel_copy_crc32c 00:09:13.779 18:34:47 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:13.779 ************************************ 00:09:13.779 18:34:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:13.779 18:34:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:13.779 18:34:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:13.779 18:34:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.779 18:34:48 accel -- common/autotest_common.sh@10 -- # set +x 00:09:13.779 ************************************ 00:09:13.779 START TEST accel_copy_crc32c_C2 00:09:13.779 ************************************ 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:13.779 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:13.779 [2024-07-15 18:34:48.066526] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:13.779 [2024-07-15 18:34:48.066612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64047 ] 00:09:13.779 [2024-07-15 18:34:48.201272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.038 [2024-07-15 18:34:48.350749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:14.038 18:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.414 00:09:15.414 real 0m1.637s 00:09:15.414 user 0m1.388s 00:09:15.414 sys 0m0.157s 00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:15.414 18:34:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:15.414 ************************************ 00:09:15.414 END TEST accel_copy_crc32c_C2 00:09:15.414 ************************************ 00:09:15.414 18:34:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:15.414 18:34:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:15.414 18:34:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:15.414 18:34:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.414 18:34:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:15.414 ************************************ 00:09:15.414 START TEST accel_dualcast 00:09:15.414 ************************************ 00:09:15.414 18:34:49 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:15.414 18:34:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:15.414 [2024-07-15 18:34:49.756881] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:15.414 [2024-07-15 18:34:49.756994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64076 ] 00:09:15.673 [2024-07-15 18:34:49.900826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.673 [2024-07-15 18:34:50.059725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:15.673 18:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:17.049 18:34:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:17.049 00:09:17.049 real 0m1.657s 00:09:17.049 user 0m1.397s 00:09:17.049 sys 0m0.166s 00:09:17.049 18:34:51 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.049 ************************************ 00:09:17.049 END TEST accel_dualcast 00:09:17.049 ************************************ 00:09:17.049 18:34:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:17.049 18:34:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:17.049 18:34:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:17.049 18:34:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:17.049 18:34:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.049 18:34:51 accel -- common/autotest_common.sh@10 -- # set +x 00:09:17.049 ************************************ 00:09:17.049 START TEST accel_compare 00:09:17.049 ************************************ 00:09:17.049 18:34:51 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:17.049 18:34:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:17.049 [2024-07-15 18:34:51.477249] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:17.049 [2024-07-15 18:34:51.477343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:09:17.308 [2024-07-15 18:34:51.619301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.308 [2024-07-15 18:34:51.773368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.567 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:17.568 18:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:18.953 18:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:18.953 00:09:18.953 real 0m1.654s 00:09:18.953 user 0m1.404s 00:09:18.953 sys 0m0.160s 00:09:18.953 18:34:53 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.953 ************************************ 00:09:18.953 END TEST accel_compare 00:09:18.953 ************************************ 00:09:18.953 18:34:53 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:18.953 18:34:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:18.953 18:34:53 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:18.953 18:34:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:18.953 18:34:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.953 18:34:53 accel -- common/autotest_common.sh@10 -- # set +x 00:09:18.953 ************************************ 00:09:18.953 START TEST accel_xor 00:09:18.953 ************************************ 00:09:18.953 18:34:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:18.953 18:34:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:18.953 [2024-07-15 18:34:53.182591] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:18.953 [2024-07-15 18:34:53.182720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64151 ] 00:09:18.953 [2024-07-15 18:34:53.328894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.217 [2024-07-15 18:34:53.485432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.217 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.218 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.218 18:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.218 18:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.218 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.218 18:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:20.592 00:09:20.592 real 0m1.677s 00:09:20.592 user 0m1.400s 00:09:20.592 sys 0m0.181s 00:09:20.592 18:34:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.592 18:34:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:20.592 ************************************ 00:09:20.592 END TEST accel_xor 00:09:20.592 ************************************ 00:09:20.592 18:34:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:20.592 18:34:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:20.592 18:34:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:20.592 18:34:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.592 18:34:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:20.592 ************************************ 00:09:20.592 START TEST accel_xor 00:09:20.592 ************************************ 00:09:20.592 18:34:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:20.592 18:34:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:20.592 [2024-07-15 18:34:54.916566] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:20.592 [2024-07-15 18:34:54.916774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64185 ] 00:09:20.592 [2024-07-15 18:34:55.054740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.851 [2024-07-15 18:34:55.211639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.851 18:34:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:22.239 ************************************ 00:09:22.239 END TEST accel_xor 00:09:22.239 ************************************ 00:09:22.239 18:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:22.239 00:09:22.239 real 0m1.659s 00:09:22.239 user 0m1.394s 00:09:22.239 sys 0m0.167s 00:09:22.239 18:34:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.239 18:34:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:22.239 18:34:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:22.239 18:34:56 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:22.239 18:34:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:22.239 18:34:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.239 18:34:56 accel -- common/autotest_common.sh@10 -- # set +x 00:09:22.239 ************************************ 00:09:22.239 START TEST accel_dif_verify 00:09:22.239 ************************************ 00:09:22.239 18:34:56 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:22.239 18:34:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:22.239 [2024-07-15 18:34:56.625963] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:22.239 [2024-07-15 18:34:56.626099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64226 ] 00:09:22.497 [2024-07-15 18:34:56.766402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.497 [2024-07-15 18:34:56.925039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.755 18:34:57 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.755 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.756 18:34:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:24.127 18:34:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:24.127 00:09:24.127 real 0m1.653s 00:09:24.127 user 0m0.016s 00:09:24.127 sys 0m0.004s 00:09:24.127 ************************************ 00:09:24.127 END TEST accel_dif_verify 00:09:24.127 ************************************ 00:09:24.127 18:34:58 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.127 18:34:58 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:24.127 18:34:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:24.127 18:34:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:24.127 18:34:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:24.127 18:34:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.127 18:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.127 ************************************ 00:09:24.127 START TEST accel_dif_generate 00:09:24.127 ************************************ 00:09:24.127 18:34:58 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.127 18:34:58 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:24.127 18:34:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:24.128 18:34:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:24.128 [2024-07-15 18:34:58.339272] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:24.128 [2024-07-15 18:34:58.339359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64261 ] 00:09:24.128 [2024-07-15 18:34:58.477869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.385 [2024-07-15 18:34:58.634155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:24.385 18:34:58 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:24.385 18:34:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 ************************************ 00:09:25.777 END TEST accel_dif_generate 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:25.777 18:34:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:25.777 18:34:59 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:25.777 00:09:25.777 real 0m1.653s 00:09:25.777 user 0m1.405s 00:09:25.777 sys 0m0.152s 00:09:25.777 18:34:59 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.777 18:34:59 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:25.777 ************************************ 00:09:25.777 18:35:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:25.777 18:35:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:25.777 18:35:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:25.777 18:35:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.777 18:35:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:25.777 ************************************ 00:09:25.777 START TEST accel_dif_generate_copy 00:09:25.777 ************************************ 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:25.777 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:25.777 [2024-07-15 18:35:00.051055] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
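The accel_dif_generate_copy case that starts here is driven by the single accel_perf invocation visible at accel/accel.sh@12 above. A minimal way to repeat just that workload against a built SPDK tree is sketched below; the harness's -c /dev/fd/62 argument is left out because that descriptor only exists inside accel.sh (build_accel_config appears to feed the JSON accel configuration through it), and the accel_module=software values in the trace suggest the default software module is what gets exercised anyway. Both of those readings are inferences from this log, not documented behaviour.

  # 1-second dif_generate_copy run, mirroring the invocation in the trace above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

The -t 1 and -w dif_generate_copy arguments correspond to the '1 seconds' and accel_opc=dif_generate_copy values picked up by the read loop that follows.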
00:09:25.778 [2024-07-15 18:35:00.051169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64295 ] 00:09:25.778 [2024-07-15 18:35:00.190456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.034 [2024-07-15 18:35:00.347563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.034 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:26.035 18:35:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
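The block of accel.sh@19 through accel.sh@23 lines above (repeated for every accel test in this log) is bash xtrace of one small loop: what appears to be accel_perf's printed configuration summary is read back a line at a time, split on ':' into a name and a value, and the workload and engine names are remembered so they can be asserted once the run ends (the accel.sh@27 checks). A self-contained sketch of that pattern follows; the field names matched in the case statement are placeholders, since the real patterns are not visible in the trace.

  #!/usr/bin/env bash
  # Sketch of the loop behind the repeated "IFS=:" / "read -r var val" trace lines.
  parse_run_summary() {
      local var val accel_opc='' accel_module=''
      while IFS=: read -r var val; do
          case "$var" in
              *workload*) accel_opc=${val//[[:space:]]/} ;;    # e.g. dif_generate_copy
              *module*)   accel_module=${val//[[:space:]]/} ;; # e.g. software
          esac
      done
      # the post-run checks seen at accel.sh@27 in the trace
      [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
  }
  printf '%s\n' 'workload: dif_generate_copy' 'module: software' | parse_run_summary && echo ok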
00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 ************************************ 00:09:27.407 END TEST accel_dif_generate_copy 00:09:27.407 ************************************ 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:27.407 00:09:27.407 real 0m1.651s 00:09:27.407 user 0m1.387s 00:09:27.407 sys 0m0.165s 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.407 18:35:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:27.407 18:35:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:27.407 18:35:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:27.407 18:35:01 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:27.407 18:35:01 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:27.407 18:35:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.407 18:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:27.407 ************************************ 00:09:27.407 START TEST accel_comp 00:09:27.407 ************************************ 00:09:27.407 18:35:01 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:27.407 18:35:01 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:27.407 18:35:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:27.407 [2024-07-15 18:35:01.764964] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:27.407 [2024-07-15 18:35:01.765128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64335 ] 00:09:27.667 [2024-07-15 18:35:01.901173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.667 [2024-07-15 18:35:02.056555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.667 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:27.928 18:35:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:29.303 18:35:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:29.303 00:09:29.303 real 0m1.652s 00:09:29.303 user 0m0.019s 00:09:29.303 sys 0m0.002s 00:09:29.303 ************************************ 00:09:29.303 END TEST accel_comp 00:09:29.303 ************************************ 00:09:29.303 18:35:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.303 18:35:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:09:29.303 18:35:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:29.303 18:35:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:29.303 18:35:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:29.303 18:35:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.303 18:35:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:29.303 ************************************ 00:09:29.303 START TEST accel_decomp 00:09:29.303 ************************************ 00:09:29.303 18:35:03 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:29.303 18:35:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:29.303 [2024-07-15 18:35:03.471556] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:29.304 [2024-07-15 18:35:03.471650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64370 ] 00:09:29.304 [2024-07-15 18:35:03.610355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.304 [2024-07-15 18:35:03.760327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
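The accel_comp and accel_decomp cases above drive the same binary at the compress and decompress opcodes, both pointed at the test/accel/bib input file through -l; the decompress run adds -y, which lines up with the val=Yes entry just above (the compress and DIF runs show val=No at the same spot), so it is presumably the verify switch. Repeated by hand, again without the harness-fed -c config, the two runs would look like this:

  # compress and decompress runs over the bib test file, as invoked in the trace above
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress   -l "$BIB"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y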
00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:29.563 18:35:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:30.939 18:35:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:30.939 00:09:30.939 real 0m1.652s 00:09:30.939 user 0m1.391s 00:09:30.939 sys 0m0.169s 00:09:30.939 18:35:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.939 ************************************ 00:09:30.939 END TEST accel_decomp 00:09:30.939 ************************************ 00:09:30.939 18:35:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:30.939 18:35:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:30.939 18:35:05 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:30.939 18:35:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:30.939 18:35:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.939 18:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:09:30.939 ************************************ 00:09:30.939 START TEST accel_decomp_full 00:09:30.939 ************************************ 00:09:30.939 18:35:05 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:30.939 18:35:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:09:30.940 18:35:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:09:30.940 [2024-07-15 18:35:05.183188] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
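accel_decomp_full repeats the decompress run with -o 0 appended; in the trace that follows, the 4096-byte buffers of the earlier tests are replaced by a single '111250 bytes' value, so -o 0 evidently switches accel_perf to processing the whole input file in one shot (an inference from that change in the trace, not something the log states). The equivalent stand-alone command:

  # full-buffer decompress run (-o 0), matching the invocation at accel.sh@12 above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0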
00:09:30.940 [2024-07-15 18:35:05.183274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64405 ] 00:09:30.940 [2024-07-15 18:35:05.327972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.198 [2024-07-15 18:35:05.508741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:31.198 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:31.199 18:35:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:32.576 18:35:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:32.576 00:09:32.576 real 0m1.693s 00:09:32.576 user 0m1.430s 00:09:32.576 sys 0m0.174s 00:09:32.576 ************************************ 00:09:32.576 END TEST accel_decomp_full 00:09:32.576 ************************************ 00:09:32.576 18:35:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.576 18:35:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:32.576 18:35:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:32.576 18:35:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:32.576 18:35:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:32.576 18:35:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.576 18:35:06 accel -- common/autotest_common.sh@10 -- # set +x 00:09:32.576 ************************************ 00:09:32.576 START TEST accel_decomp_mcore 00:09:32.576 ************************************ 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:32.576 18:35:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:32.576 [2024-07-15 18:35:06.935663] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:32.576 [2024-07-15 18:35:06.935765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64443 ] 00:09:32.835 [2024-07-15 18:35:07.076968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.835 [2024-07-15 18:35:07.242399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.835 [2024-07-15 18:35:07.242522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.835 [2024-07-15 18:35:07.244031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.835 [2024-07-15 18:35:07.244055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.093 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- 
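accel_decomp_mcore is the same decompress workload spread over four cores: the -m 0xf mask passed to accel_perf shows up in the EAL arguments as -c 0xf, app.c reports "Total cores available: 4", and four reactors start on cores 0 through 3 before the option trace resumes. Stand-alone, again without the harness config:

  # multi-core decompress run, core mask 0xf (cores 0-3)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf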
accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:33.094 18:35:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:34.518 00:09:34.518 real 0m1.704s 00:09:34.518 user 0m0.020s 00:09:34.518 sys 0m0.002s 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.518 18:35:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:34.518 ************************************ 00:09:34.518 END TEST accel_decomp_mcore 00:09:34.518 ************************************ 00:09:34.518 18:35:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:34.518 18:35:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:34.518 18:35:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:34.518 18:35:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.518 18:35:08 accel -- common/autotest_common.sh@10 -- # set +x 00:09:34.518 ************************************ 00:09:34.518 START TEST accel_decomp_full_mcore 00:09:34.518 ************************************ 00:09:34.518 18:35:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:34.518 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:34.519 18:35:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:34.519 18:35:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:34.519 [2024-07-15 18:35:08.701096] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:34.519 [2024-07-15 18:35:08.701205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64482 ] 00:09:34.519 [2024-07-15 18:35:08.847683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.778 [2024-07-15 18:35:09.022324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.778 [2024-07-15 18:35:09.022427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.778 [2024-07-15 18:35:09.022614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.778 [2024-07-15 18:35:09.022617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:34.779 18:35:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 18:35:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:36.155 00:09:36.155 real 0m1.720s 00:09:36.155 user 0m5.080s 00:09:36.155 sys 0m0.199s 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.155 18:35:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:36.155 ************************************ 00:09:36.155 END TEST accel_decomp_full_mcore 00:09:36.155 ************************************ 00:09:36.155 18:35:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:36.155 18:35:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:36.155 18:35:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:36.155 18:35:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.155 18:35:10 accel -- common/autotest_common.sh@10 -- # set +x 00:09:36.155 ************************************ 00:09:36.155 START TEST accel_decomp_mthread 00:09:36.155 ************************************ 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:36.155 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:36.155 [2024-07-15 18:35:10.462273] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:36.155 [2024-07-15 18:35:10.462387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64525 ] 00:09:36.155 [2024-07-15 18:35:10.601250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.416 [2024-07-15 18:35:10.759727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.416 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
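The block of xtrace above is accel_decomp_mthread stepping through its option parser; the actual workload is the accel_perf example app. A minimal sketch of the equivalent standalone command, copied from the accel.sh@12 line in this trace; the flag meanings are inferred from the traced values (-t 1 matches the '1 seconds' duration, -T 2 the thread count of 2, -w decompress the accel_opc of decompress) and should be treated as assumptions, not documentation:

# accel.sh feeds its generated JSON accel config to the app on fd 62.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2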
00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:36.417 18:35:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.884 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:37.885 00:09:37.885 real 0m1.657s 00:09:37.885 user 0m1.396s 00:09:37.885 sys 0m0.168s 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.885 18:35:12 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:37.885 ************************************ 00:09:37.885 END TEST accel_decomp_mthread 00:09:37.885 ************************************ 00:09:37.885 18:35:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:37.885 18:35:12 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:37.885 18:35:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:37.885 18:35:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.885 18:35:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:37.885 ************************************ 00:09:37.885 START 
TEST accel_decomp_full_mthread 00:09:37.885 ************************************ 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:37.885 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:37.885 [2024-07-15 18:35:12.190925] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:37.885 [2024-07-15 18:35:12.191059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64554 ] 00:09:37.885 [2024-07-15 18:35:12.343371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.143 [2024-07-15 18:35:12.512385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
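The accel_decomp_full_mthread setup above differs from the plain mthread run in one flag, -o 0, and the traced buffer value changes from '4096 bytes' to '111250 bytes'; the "full" variants appear to decompress the whole input in one shot rather than in 4 KiB chunks (an inference from the traced values, not from documentation). The command as issued, verbatim from the accel.sh@12 line above:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2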
00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:38.143 18:35:12 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:38.143 18:35:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.518 00:09:39.518 real 0m1.740s 00:09:39.518 user 0m1.462s 00:09:39.518 sys 0m0.180s 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.518 18:35:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:39.518 ************************************ 00:09:39.518 END TEST accel_decomp_full_mthread 00:09:39.518 ************************************ 
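Each decompress variant above ends with the same gate before its END TEST banner. Reconstructed from the accel.sh@27 checks in the trace, using the variables assigned at accel.sh@22 and accel.sh@23; this is a rough sketch of the script logic, not a verbatim copy:

# A module and an opcode must have been parsed from accel_perf's output,
# and the software engine must have serviced the decompress.
[[ -n "$accel_module" ]]
[[ -n "$accel_opc" ]]
[[ "$accel_module" == "software" ]]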
00:09:39.518 18:35:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:39.518 18:35:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:39.518 18:35:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:39.518 18:35:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:39.518 18:35:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:39.518 18:35:13 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:39.518 18:35:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:39.518 18:35:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.518 18:35:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.518 18:35:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.518 18:35:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:39.518 18:35:13 accel -- common/autotest_common.sh@10 -- # set +x 00:09:39.518 18:35:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:39.518 18:35:13 accel -- accel/accel.sh@41 -- # jq -r . 00:09:39.518 ************************************ 00:09:39.518 START TEST accel_dif_functional_tests 00:09:39.518 ************************************ 00:09:39.518 18:35:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:39.776 [2024-07-15 18:35:14.017821] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:39.776 [2024-07-15 18:35:14.017974] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64595 ] 00:09:39.776 [2024-07-15 18:35:14.164538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.034 [2024-07-15 18:35:14.321518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.034 [2024-07-15 18:35:14.321638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.035 [2024-07-15 18:35:14.321638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.035 00:09:40.035 00:09:40.035 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.035 http://cunit.sourceforge.net/ 00:09:40.035 00:09:40.035 00:09:40.035 Suite: accel_dif 00:09:40.035 Test: verify: DIF generated, GUARD check ...passed 00:09:40.035 Test: verify: DIF generated, APPTAG check ...passed 00:09:40.035 Test: verify: DIF generated, REFTAG check ...passed 00:09:40.035 Test: verify: DIF not generated, GUARD check ...passed 00:09:40.035 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 18:35:14.456986] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:40.035 [2024-07-15 18:35:14.457120] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:40.035 passed 00:09:40.035 Test: verify: DIF not generated, REFTAG check ...passed 00:09:40.035 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:40.035 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 18:35:14.457241] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:40.035 [2024-07-15 18:35:14.457326] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:40.035 passed 00:09:40.035 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:09:40.035 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:40.035 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:40.035 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:40.035 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 18:35:14.457588] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:40.035 passed 00:09:40.035 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:40.035 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:40.035 Test: verify copy: DIF not generated, GUARD check ...passed 00:09:40.035 Test: verify copy: DIF not generated, APPTAG check ...passed 00:09:40.035 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 18:35:14.457890] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:40.035 [2024-07-15 18:35:14.457975] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:40.035 [2024-07-15 18:35:14.458025] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:40.035 passed 00:09:40.035 Test: generate copy: DIF generated, GUARD check ...passed 00:09:40.035 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:40.035 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:40.035 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:40.035 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:40.035 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:40.035 Test: generate copy: iovecs-len validate ...passed 00:09:40.035 Test: generate copy: buffer alignment validate ...passed 00:09:40.035 00:09:40.035 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.035 suites 1 1 n/a 0 0 00:09:40.035 tests 26 26 26 0 0 00:09:40.035 asserts 115 115 115 0 n/a 00:09:40.035 00:09:40.035 Elapsed time = 0.005 seconds 00:09:40.035 [2024-07-15 18:35:14.458481] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
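The accel_dif suite above is not driven through accel_perf; it is a standalone CUnit binary. Its invocation, verbatim from the run_test line earlier in this trace (fd 62 presumably carries the JSON config produced by build_accel_config, as in the accel_perf runs):

/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62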
00:09:40.601 00:09:40.601 real 0m0.828s 00:09:40.601 user 0m1.171s 00:09:40.601 sys 0m0.233s 00:09:40.601 18:35:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.601 18:35:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:40.601 ************************************ 00:09:40.601 END TEST accel_dif_functional_tests 00:09:40.601 ************************************ 00:09:40.601 18:35:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:40.601 00:09:40.601 real 0m38.849s 00:09:40.601 user 0m39.854s 00:09:40.601 sys 0m5.494s 00:09:40.601 18:35:14 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.601 18:35:14 accel -- common/autotest_common.sh@10 -- # set +x 00:09:40.601 ************************************ 00:09:40.601 END TEST accel 00:09:40.601 ************************************ 00:09:40.601 18:35:14 -- common/autotest_common.sh@1142 -- # return 0 00:09:40.601 18:35:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:40.601 18:35:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:40.601 18:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.601 18:35:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.601 ************************************ 00:09:40.601 START TEST accel_rpc 00:09:40.601 ************************************ 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:40.601 * Looking for test storage... 00:09:40.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:40.601 18:35:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:40.601 18:35:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64665 00:09:40.601 18:35:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64665 00:09:40.601 18:35:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64665 ']' 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.601 18:35:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.601 [2024-07-15 18:35:15.057095] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:40.601 [2024-07-15 18:35:15.057193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64665 ] 00:09:40.859 [2024-07-15 18:35:15.204191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.117 [2024-07-15 18:35:15.374914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.683 18:35:16 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.683 18:35:16 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:41.683 18:35:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:41.683 18:35:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:41.683 18:35:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:41.683 18:35:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:41.683 18:35:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:41.683 18:35:16 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:41.683 18:35:16 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.683 18:35:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.683 ************************************ 00:09:41.683 START TEST accel_assign_opcode 00:09:41.683 ************************************ 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:41.683 [2024-07-15 18:35:16.135642] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:41.683 [2024-07-15 18:35:16.143626] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.683 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.249 software 00:09:42.249 ************************************ 00:09:42.249 END TEST accel_assign_opcode 00:09:42.249 ************************************ 00:09:42.249 00:09:42.249 real 0m0.386s 00:09:42.249 user 0m0.033s 00:09:42.249 sys 0m0.013s 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.249 18:35:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:42.249 18:35:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64665 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64665 ']' 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64665 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64665 00:09:42.249 killing process with pid 64665 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64665' 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 64665 00:09:42.249 18:35:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 64665 00:09:42.816 00:09:42.816 real 0m2.290s 00:09:42.816 user 0m2.307s 00:09:42.816 sys 0m0.614s 00:09:42.816 18:35:17 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.816 ************************************ 00:09:42.816 END TEST accel_rpc 00:09:42.816 18:35:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.816 ************************************ 00:09:42.816 18:35:17 -- common/autotest_common.sh@1142 -- # return 0 00:09:42.816 18:35:17 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:42.816 18:35:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:42.816 18:35:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.816 18:35:17 -- common/autotest_common.sh@10 -- # set +x 00:09:42.816 ************************************ 00:09:42.816 START TEST app_cmdline 00:09:42.816 ************************************ 00:09:42.816 18:35:17 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:43.073 * Looking for test storage... 
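Stripped of the xtrace, the accel_rpc/accel_assign_opcode sequence that finished just above boils down to a few JSON-RPC calls against a target started with --wait-for-rpc. A hedged manual replay, assuming rpc_cmd resolves to scripts/rpc.py talking to /var/tmp/spdk.sock as it does in this harness:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
# wait for the RPC socket before issuing calls; the test uses waitforlisten here
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy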
00:09:43.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:43.073 18:35:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:43.073 18:35:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64776 00:09:43.073 18:35:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64776 00:09:43.073 18:35:17 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64776 ']' 00:09:43.073 18:35:17 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.073 18:35:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:43.073 18:35:17 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.073 18:35:17 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.074 18:35:17 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.074 18:35:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:43.074 [2024-07-15 18:35:17.408766] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:43.074 [2024-07-15 18:35:17.408888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64776 ] 00:09:43.074 [2024-07-15 18:35:17.555459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.332 [2024-07-15 18:35:17.729182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:44.266 { 00:09:44.266 "fields": { 00:09:44.266 "commit": "f604975ba", 00:09:44.266 "major": 24, 00:09:44.266 "minor": 9, 00:09:44.266 "patch": 0, 00:09:44.266 "suffix": "-pre" 00:09:44.266 }, 00:09:44.266 "version": "SPDK v24.09-pre git sha1 f604975ba" 00:09:44.266 } 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:44.266 18:35:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:44.266 18:35:18 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:44.266 18:35:18 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.525 2024/07/15 18:35:18 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:09:44.525 request: 00:09:44.525 { 00:09:44.525 "method": "env_dpdk_get_mem_stats", 00:09:44.525 "params": {} 00:09:44.525 } 00:09:44.525 Got JSON-RPC error response 00:09:44.525 GoRPCClient: error on JSON-RPC call 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.525 18:35:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64776 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64776 ']' 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64776 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64776 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:44.525 killing process with pid 64776 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:44.525 18:35:18 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64776' 00:09:44.526 18:35:18 app_cmdline -- common/autotest_common.sh@967 -- # kill 64776 00:09:44.526 18:35:18 app_cmdline -- common/autotest_common.sh@972 -- # wait 64776 00:09:45.460 00:09:45.460 real 0m2.347s 00:09:45.460 user 0m2.704s 00:09:45.460 sys 0m0.669s 00:09:45.460 18:35:19 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.460 ************************************ 00:09:45.460 END TEST app_cmdline 00:09:45.460 ************************************ 00:09:45.460 18:35:19 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.460 18:35:19 -- common/autotest_common.sh@1142 -- # return 0 00:09:45.460 18:35:19 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:45.460 18:35:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:45.460 18:35:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.460 18:35:19 -- common/autotest_common.sh@10 -- # set +x 00:09:45.460 ************************************ 00:09:45.460 START TEST version 00:09:45.460 ************************************ 00:09:45.460 18:35:19 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:45.460 * Looking for test storage... 00:09:45.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:45.460 18:35:19 version -- app/version.sh@17 -- # get_header_version major 00:09:45.460 18:35:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # cut -f2 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.460 18:35:19 version -- app/version.sh@17 -- # major=24 00:09:45.460 18:35:19 version -- app/version.sh@18 -- # get_header_version minor 00:09:45.460 18:35:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # cut -f2 00:09:45.460 18:35:19 version -- app/version.sh@18 -- # minor=9 00:09:45.460 18:35:19 version -- app/version.sh@19 -- # get_header_version patch 00:09:45.460 18:35:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # cut -f2 00:09:45.460 18:35:19 version -- app/version.sh@19 -- # patch=0 00:09:45.460 18:35:19 version -- app/version.sh@20 -- # get_header_version suffix 00:09:45.460 18:35:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # cut -f2 00:09:45.460 18:35:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.460 18:35:19 version -- app/version.sh@20 -- # suffix=-pre 00:09:45.460 18:35:19 version -- app/version.sh@22 -- # version=24.9 00:09:45.460 18:35:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:45.461 18:35:19 version -- app/version.sh@28 -- # version=24.9rc0 00:09:45.461 18:35:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:45.461 18:35:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:45.461 18:35:19 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:45.461 18:35:19 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:45.461 00:09:45.461 real 0m0.178s 00:09:45.461 user 0m0.094s 00:09:45.461 sys 0m0.122s 00:09:45.461 18:35:19 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.461 18:35:19 version -- common/autotest_common.sh@10 -- # set 
+x 00:09:45.461 ************************************ 00:09:45.461 END TEST version 00:09:45.461 ************************************ 00:09:45.461 18:35:19 -- common/autotest_common.sh@1142 -- # return 0 00:09:45.461 18:35:19 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@198 -- # uname -s 00:09:45.461 18:35:19 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:45.461 18:35:19 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:45.461 18:35:19 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:45.461 18:35:19 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:45.461 18:35:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:45.461 18:35:19 -- common/autotest_common.sh@10 -- # set +x 00:09:45.461 18:35:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:09:45.461 18:35:19 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:09:45.461 18:35:19 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:45.461 18:35:19 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.461 18:35:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.461 18:35:19 -- common/autotest_common.sh@10 -- # set +x 00:09:45.461 ************************************ 00:09:45.461 START TEST nvmf_tcp 00:09:45.461 ************************************ 00:09:45.461 18:35:19 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:45.720 * Looking for test storage... 00:09:45.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.720 18:35:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.720 18:35:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.720 18:35:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.720 18:35:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 18:35:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 18:35:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 18:35:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:09:45.720 18:35:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:45.720 18:35:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.720 18:35:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:45.720 18:35:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.720 18:35:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.720 18:35:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.720 18:35:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.720 ************************************ 00:09:45.720 START TEST nvmf_example 00:09:45.720 ************************************ 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.720 * Looking for test storage... 
00:09:45.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.720 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:45.721 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.979 18:35:20 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.979 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.980 Cannot find device "nvmf_init_br" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.980 Cannot find device "nvmf_tgt_br" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.980 Cannot find device "nvmf_tgt_br2" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.980 Cannot find device "nvmf_init_br" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.980 Cannot find device "nvmf_tgt_br" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.980 Cannot find device 
"nvmf_tgt_br2" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.980 Cannot find device "nvmf_br" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.980 Cannot find device "nvmf_init_if" 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.980 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:46.285 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:46.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:09:46.286 00:09:46.286 --- 10.0.0.2 ping statistics --- 00:09:46.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.286 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:46.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:46.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:46.286 00:09:46.286 --- 10.0.0.3 ping statistics --- 00:09:46.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.286 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:09:46.286 00:09:46.286 --- 10.0.0.1 ping statistics --- 00:09:46.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.286 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65138 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
65138 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65138 ']' 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.286 18:35:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.732 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:09:47.733 18:35:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:57.705 Initializing NVMe Controllers 00:09:57.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.705 Initialization complete. Launching workers. 00:09:57.705 ======================================================== 00:09:57.705 Latency(us) 00:09:57.705 Device Information : IOPS MiB/s Average min max 00:09:57.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16021.05 62.58 3994.78 658.07 22257.53 00:09:57.705 ======================================================== 00:09:57.705 Total : 16021.05 62.58 3994.78 658.07 22257.53 00:09:57.705 00:09:57.962 18:35:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:57.962 18:35:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:57.962 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.962 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:57.962 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.963 rmmod nvme_tcp 00:09:57.963 rmmod nvme_fabrics 00:09:57.963 rmmod nvme_keyring 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65138 ']' 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65138 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65138 ']' 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65138 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65138 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65138' 00:09:57.963 killing process with pid 65138 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65138 00:09:57.963 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65138 00:09:58.220 nvmf threads initialize successfully 00:09:58.220 bdev subsystem init successfully 
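For reference, the control-plane sequence the nvmf_example test drove above, collapsed into a plain sketch (arguments copied from the trace; rpc_cmd in the test is assumed here to behave like invoking scripts/rpc.py against the target's /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport (flags exactly as traced)
    rpc.py bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # exposed as NSID 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # load generator that produced the 10-second randrw numbers reported above:
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'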
00:09:58.220 created a nvmf target service 00:09:58.220 create targets's poll groups done 00:09:58.220 all subsystems of target started 00:09:58.221 nvmf target is running 00:09:58.221 all subsystems of target stopped 00:09:58.221 destroy targets's poll groups done 00:09:58.221 destroyed the nvmf target service 00:09:58.221 bdev subsystem finish successfully 00:09:58.221 nvmf threads destroy successfully 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.221 00:09:58.221 real 0m12.623s 00:09:58.221 user 0m44.548s 00:09:58.221 sys 0m2.429s 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.221 ************************************ 00:09:58.221 END TEST nvmf_example 00:09:58.221 ************************************ 00:09:58.221 18:35:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.481 18:35:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:58.481 18:35:32 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:58.481 18:35:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.481 18:35:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.481 18:35:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.481 ************************************ 00:09:58.481 START TEST nvmf_filesystem 00:09:58.481 ************************************ 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:58.481 * Looking for test storage... 
00:09:58.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:58.481 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:58.482 #define SPDK_CONFIG_H 00:09:58.482 #define SPDK_CONFIG_APPS 1 00:09:58.482 #define SPDK_CONFIG_ARCH native 00:09:58.482 #undef SPDK_CONFIG_ASAN 00:09:58.482 #define SPDK_CONFIG_AVAHI 1 00:09:58.482 #undef SPDK_CONFIG_CET 00:09:58.482 #define SPDK_CONFIG_COVERAGE 1 00:09:58.482 #define SPDK_CONFIG_CROSS_PREFIX 00:09:58.482 #undef SPDK_CONFIG_CRYPTO 00:09:58.482 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:58.482 #undef SPDK_CONFIG_CUSTOMOCF 00:09:58.482 #undef SPDK_CONFIG_DAOS 00:09:58.482 #define SPDK_CONFIG_DAOS_DIR 00:09:58.482 #define SPDK_CONFIG_DEBUG 1 00:09:58.482 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:58.482 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:58.482 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:58.482 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:58.482 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:58.482 #undef SPDK_CONFIG_DPDK_UADK 00:09:58.482 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:58.482 #define SPDK_CONFIG_EXAMPLES 1 00:09:58.482 #undef SPDK_CONFIG_FC 00:09:58.482 #define SPDK_CONFIG_FC_PATH 00:09:58.482 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:58.482 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:58.482 #undef SPDK_CONFIG_FUSE 00:09:58.482 #undef SPDK_CONFIG_FUZZER 00:09:58.482 #define SPDK_CONFIG_FUZZER_LIB 00:09:58.482 #define SPDK_CONFIG_GOLANG 1 00:09:58.482 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:58.482 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:58.482 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:58.482 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:58.482 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:58.482 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:58.482 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:58.482 #define SPDK_CONFIG_IDXD 1 00:09:58.482 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:58.482 #undef SPDK_CONFIG_IPSEC_MB 00:09:58.482 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:58.482 #define SPDK_CONFIG_ISAL 1 00:09:58.482 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:58.482 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:58.482 #define SPDK_CONFIG_LIBDIR 00:09:58.482 #undef SPDK_CONFIG_LTO 00:09:58.482 #define SPDK_CONFIG_MAX_LCORES 128 00:09:58.482 #define SPDK_CONFIG_NVME_CUSE 1 00:09:58.482 #undef SPDK_CONFIG_OCF 00:09:58.482 #define SPDK_CONFIG_OCF_PATH 00:09:58.482 #define SPDK_CONFIG_OPENSSL_PATH 00:09:58.482 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:58.482 #define SPDK_CONFIG_PGO_DIR 00:09:58.482 #undef SPDK_CONFIG_PGO_USE 00:09:58.482 #define SPDK_CONFIG_PREFIX /usr/local 00:09:58.482 #undef SPDK_CONFIG_RAID5F 00:09:58.482 #undef SPDK_CONFIG_RBD 00:09:58.482 #define SPDK_CONFIG_RDMA 1 00:09:58.482 #define SPDK_CONFIG_RDMA_PROV verbs 
00:09:58.482 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:58.482 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:58.482 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:58.482 #define SPDK_CONFIG_SHARED 1 00:09:58.482 #undef SPDK_CONFIG_SMA 00:09:58.482 #define SPDK_CONFIG_TESTS 1 00:09:58.482 #undef SPDK_CONFIG_TSAN 00:09:58.482 #define SPDK_CONFIG_UBLK 1 00:09:58.482 #define SPDK_CONFIG_UBSAN 1 00:09:58.482 #undef SPDK_CONFIG_UNIT_TESTS 00:09:58.482 #undef SPDK_CONFIG_URING 00:09:58.482 #define SPDK_CONFIG_URING_PATH 00:09:58.482 #undef SPDK_CONFIG_URING_ZNS 00:09:58.482 #define SPDK_CONFIG_USDT 1 00:09:58.482 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:58.482 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:58.482 #undef SPDK_CONFIG_VFIO_USER 00:09:58.482 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:58.482 #define SPDK_CONFIG_VHOST 1 00:09:58.482 #define SPDK_CONFIG_VIRTIO 1 00:09:58.482 #undef SPDK_CONFIG_VTUNE 00:09:58.482 #define SPDK_CONFIG_VTUNE_DIR 00:09:58.482 #define SPDK_CONFIG_WERROR 1 00:09:58.482 #define SPDK_CONFIG_WPDK_DIR 00:09:58.482 #undef SPDK_CONFIG_XNVME 00:09:58.482 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:58.482 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:58.483 18:35:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:58.483 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65384 ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65384 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.fBEoP4 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.fBEoP4/tests/target /tmp/spdk.fBEoP4 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:09:58.484 
18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786120192 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244125184 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786120192 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244125184 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:09:58.484 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=90504826880 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9197953024 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:58.485 * Looking for test storage... 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13786120192 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.485 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.744 18:35:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:58.744 Cannot find device "nvmf_tgt_br" 00:09:58.744 18:35:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.744 Cannot find device "nvmf_tgt_br2" 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:58.744 Cannot find device "nvmf_tgt_br" 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:58.744 Cannot find device "nvmf_tgt_br2" 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.744 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.745 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:59.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:59.008 00:09:59.008 --- 10.0.0.2 ping statistics --- 00:09:59.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.008 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:59.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:09:59.008 00:09:59.008 --- 10.0.0.3 ping statistics --- 00:09:59.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.008 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:09:59.008 00:09:59.008 --- 10.0.0.1 ping statistics --- 00:09:59.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.008 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:09:59.008 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.009 ************************************ 00:09:59.009 START TEST nvmf_filesystem_no_in_capsule 00:09:59.009 ************************************ 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65551 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65551 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65551 ']' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.009 18:35:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.009 [2024-07-15 18:35:33.465680] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:09:59.009 [2024-07-15 18:35:33.465789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.266 [2024-07-15 18:35:33.615185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.524 [2024-07-15 18:35:33.790582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.524 [2024-07-15 18:35:33.790685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.524 [2024-07-15 18:35:33.790702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.524 [2024-07-15 18:35:33.790715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.524 [2024-07-15 18:35:33.790726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.524 [2024-07-15 18:35:33.790927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.524 [2024-07-15 18:35:33.791110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.524 [2024-07-15 18:35:33.791878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.524 [2024-07-15 18:35:33.791889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.092 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 [2024-07-15 18:35:34.561732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.350 
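nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A minimal sketch of the equivalent steps, assuming the waitforlisten loop polls rpc.py against /var/tmp/spdk.sock (only the launch command line and the waiting message are visible in the trace):

# Launch nvmf_tgt inside the namespace: -m 0xF pins it to 4 cores (matching the
# four "Reactor started" lines), -e 0xFFFF enables all tracepoint groups.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# waitforlisten equivalent (sketch, assumed polling loop): wait until the RPC
# server answers on /var/tmp/spdk.sock, bailing out if the target died.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done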
18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.350 Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.350 [2024-07-15 18:35:34.821924] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.350 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
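The rpc_cmd calls traced above provision the target over its UNIX-socket RPC channel. Written out as plain scripts/rpc.py invocations, assuming rpc_cmd simply forwards its arguments to rpc.py:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport; -u 8192 sets the I/O unit size, -c 0 disables in-capsule data
# for this first variant (the second run repeats everything with -c 4096), and
# -o is the extra TCP option appended by nvmf/common.sh.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0

# 512 MiB RAM-backed bdev: 1048576 blocks of 512 bytes.
$rpc bdev_malloc_create 512 512 -b Malloc1

# Subsystem with a fixed serial, a namespace, and a TCP listener on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420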
# set +x 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:00.728 { 00:10:00.728 "aliases": [ 00:10:00.728 "b8efcc56-6c65-4f4c-a0ac-c6fb1c4e1b06" 00:10:00.728 ], 00:10:00.728 "assigned_rate_limits": { 00:10:00.728 "r_mbytes_per_sec": 0, 00:10:00.728 "rw_ios_per_sec": 0, 00:10:00.728 "rw_mbytes_per_sec": 0, 00:10:00.728 "w_mbytes_per_sec": 0 00:10:00.728 }, 00:10:00.728 "block_size": 512, 00:10:00.728 "claim_type": "exclusive_write", 00:10:00.728 "claimed": true, 00:10:00.728 "driver_specific": {}, 00:10:00.728 "memory_domains": [ 00:10:00.728 { 00:10:00.728 "dma_device_id": "system", 00:10:00.728 "dma_device_type": 1 00:10:00.728 }, 00:10:00.728 { 00:10:00.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.728 "dma_device_type": 2 00:10:00.728 } 00:10:00.728 ], 00:10:00.728 "name": "Malloc1", 00:10:00.728 "num_blocks": 1048576, 00:10:00.728 "product_name": "Malloc disk", 00:10:00.728 "supported_io_types": { 00:10:00.728 "abort": true, 00:10:00.728 "compare": false, 00:10:00.728 "compare_and_write": false, 00:10:00.728 "copy": true, 00:10:00.728 "flush": true, 00:10:00.728 "get_zone_info": false, 00:10:00.728 "nvme_admin": false, 00:10:00.728 "nvme_io": false, 00:10:00.728 "nvme_io_md": false, 00:10:00.728 "nvme_iov_md": false, 00:10:00.728 "read": true, 00:10:00.728 "reset": true, 00:10:00.728 "seek_data": false, 00:10:00.728 "seek_hole": false, 00:10:00.728 "unmap": true, 00:10:00.728 "write": true, 00:10:00.728 "write_zeroes": true, 00:10:00.728 "zcopy": true, 00:10:00.728 "zone_append": false, 00:10:00.728 "zone_management": false 00:10:00.728 }, 00:10:00.728 "uuid": "b8efcc56-6c65-4f4c-a0ac-c6fb1c4e1b06", 00:10:00.728 "zoned": false 00:10:00.728 } 00:10:00.728 ]' 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:00.728 18:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.728 18:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.728 18:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.728 18:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
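get_bdev_size derives the expected device size from the bdev descriptor printed above, and the host then connects over NVMe/TCP. A sketch of the same arithmetic (computing bytes directly instead of going through megabytes, but reaching the same 536870912) together with the connect command copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Expected size in bytes: block_size * num_blocks from bdev_get_bdevs.
bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')     # 512
nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')     # 1048576
malloc_size=$(( bs * nb ))                                      # 536870912 bytes = 512 MiB

# Host side: connect to the subsystem exported on 10.0.0.2:4420.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 \
             --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420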
nvme_devices=0 00:10:00.728 18:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.728 18:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.631 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.631 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.631 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:02.889 18:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.265 ************************************ 
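Once the controller appears, the script locates the new block device by its serial, checks its size against the exported malloc bdev, and carves a single GPT partition for the filesystem subtests. A sketch of those host-side steps; the sysfs arithmetic inside sec_size_to_bytes is an assumption, since only its 536870912-byte result is visible in the trace:

malloc_size=536870912   # from the bdev arithmetic above

# Find the namespace block device whose SERIAL matches the subsystem serial.
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1

# Size check (assumed sec_size_to_bytes implementation): /sys/block/<dev>/size
# is reported in 512-byte sectors.
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
(( nvme_size == malloc_size )) || exit 1

# One GPT partition covering the whole namespace, then re-read the table.
mkdir -p /mnt/device
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1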
00:10:04.265 START TEST filesystem_ext4 00:10:04.265 ************************************ 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:04.265 mke2fs 1.46.5 (30-Dec-2021) 00:10:04.265 Discarding device blocks: 0/522240 done 00:10:04.265 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:04.265 Filesystem UUID: 32439b12-9aad-4888-8f36-2dad00a34830 00:10:04.265 Superblock backups stored on blocks: 00:10:04.265 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:04.265 00:10:04.265 Allocating group tables: 0/64 done 00:10:04.265 Writing inode tables: 0/64 done 00:10:04.265 Creating journal (8192 blocks): done 00:10:04.265 Writing superblocks and filesystem accounting information: 0/64 done 00:10:04.265 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.265 18:35:38 
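Each filesystem_* subtest runs the same smoke test over the NVMe/TCP-backed partition: make a filesystem, mount it, create and delete a file with syncs in between, unmount, and confirm that the target process and the block devices survived. A standalone sketch of that sequence, reconstructed from the ext4 trace above (the retry counter set up around umount is simplified away):

fstype=ext4
dev=/dev/nvme0n1p1
nvmfpid=65551                     # target pid in this run

mkfs.$fstype -F "$dev"            # force flag chosen by make_filesystem (-F for ext4)
mount "$dev" /mnt/device
touch /mnt/device/aaa             # simple write through the NVMe/TCP path
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                                 # target must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1              # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1            # partition still present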
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65551 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.265 00:10:04.265 real 0m0.396s 00:10:04.265 user 0m0.027s 00:10:04.265 sys 0m0.067s 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.265 ************************************ 00:10:04.265 END TEST filesystem_ext4 00:10:04.265 ************************************ 00:10:04.265 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.525 ************************************ 00:10:04.525 START TEST filesystem_btrfs 00:10:04.525 ************************************ 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:04.525 
18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:04.525 btrfs-progs v6.6.2 00:10:04.525 See https://btrfs.readthedocs.io for more information. 00:10:04.525 00:10:04.525 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:04.525 NOTE: several default settings have changed in version 5.15, please make sure 00:10:04.525 this does not affect your deployments: 00:10:04.525 - DUP for metadata (-m dup) 00:10:04.525 - enabled no-holes (-O no-holes) 00:10:04.525 - enabled free-space-tree (-R free-space-tree) 00:10:04.525 00:10:04.525 Label: (null) 00:10:04.525 UUID: 8e6d9332-d5f5-4850-8f62-a77109cc6401 00:10:04.525 Node size: 16384 00:10:04.525 Sector size: 4096 00:10:04.525 Filesystem size: 510.00MiB 00:10:04.525 Block group profiles: 00:10:04.525 Data: single 8.00MiB 00:10:04.525 Metadata: DUP 32.00MiB 00:10:04.525 System: DUP 8.00MiB 00:10:04.525 SSD detected: yes 00:10:04.525 Zoned device: no 00:10:04.525 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:04.525 Runtime features: free-space-tree 00:10:04.525 Checksum: crc32c 00:10:04.525 Number of devices: 1 00:10:04.525 Devices: 00:10:04.525 ID SIZE PATH 00:10:04.525 1 510.00MiB /dev/nvme0n1p1 00:10:04.525 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:04.525 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65551 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.526 18:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.526 00:10:04.526 real 0m0.229s 00:10:04.526 user 0m0.026s 00:10:04.526 sys 0m0.077s 00:10:04.526 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.526 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:04.526 
************************************ 00:10:04.526 END TEST filesystem_btrfs 00:10:04.526 ************************************ 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.785 ************************************ 00:10:04.785 START TEST filesystem_xfs 00:10:04.785 ************************************ 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:04.785 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:04.785 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:04.785 = sectsz=512 attr=2, projid32bit=1 00:10:04.785 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:04.785 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:04.785 data = bsize=4096 blocks=130560, imaxpct=25 00:10:04.785 = sunit=0 swidth=0 blks 00:10:04.785 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:04.785 log =internal log bsize=4096 blocks=16384, version=2 00:10:04.785 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:04.785 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:05.353 Discarding blocks...Done. 
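The make_filesystem helper visible in all three traces does little more than pick the non-interactive force flag per filesystem type before calling mkfs. A hedged reconstruction from the xtrace conditionals (any retry-on-failure path hinted at by "local i=0" is omitted, since it never triggers in this log):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    # mkfs.ext4 wants -F to run non-interactively on a partitioned device;
    # btrfs and xfs use lowercase -f for the same purpose.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs.$fstype $force "$dev_name"
}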
00:10:05.353 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:05.353 18:35:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:07.886 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65551 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.887 ************************************ 00:10:07.887 END TEST filesystem_xfs 00:10:07.887 ************************************ 00:10:07.887 00:10:07.887 real 0m3.028s 00:10:07.887 user 0m0.026s 00:10:07.887 sys 0m0.061s 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.887 18:35:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65551 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65551 ']' 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65551 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65551 00:10:07.887 killing process with pid 65551 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65551' 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65551 00:10:07.887 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65551 00:10:08.453 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:08.453 00:10:08.453 real 0m9.498s 00:10:08.453 user 0m35.305s 00:10:08.453 sys 0m1.967s 00:10:08.453 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.453 ************************************ 00:10:08.453 END TEST nvmf_filesystem_no_in_capsule 00:10:08.453 ************************************ 00:10:08.453 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
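Teardown for the first variant, restated as a sketch of the commands traced above; the loop bounds inside waitforserial_disconnect are assumptions, since only a single pass is visible in the log:

# Remove the test partition and flush before dropping the connection.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync

# Disconnect the host and wait until the serial disappears from lsblk.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done

# Tear down the subsystem and stop the target (killprocess in the trace checks
# the pid still belongs to reactor_0, sends SIGTERM, then waits for exit).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill -0 65551 && kill 65551
wait 65551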
00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.712 ************************************ 00:10:08.712 START TEST nvmf_filesystem_in_capsule 00:10:08.712 ************************************ 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65868 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65868 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65868 ']' 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.712 18:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.712 [2024-07-15 18:35:43.023084] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:08.712 [2024-07-15 18:35:43.023202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.712 [2024-07-15 18:35:43.168609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.970 [2024-07-15 18:35:43.320287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.970 [2024-07-15 18:35:43.320577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.970 [2024-07-15 18:35:43.320693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.970 [2024-07-15 18:35:43.320744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:08.970 [2024-07-15 18:35:43.320773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.970 [2024-07-15 18:35:43.321040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.970 [2024-07-15 18:35:43.321127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.970 [2024-07-15 18:35:43.321400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.970 [2024-07-15 18:35:43.321400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.537 18:35:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.537 18:35:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:09.537 18:35:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.537 18:35:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.537 18:35:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.537 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.537 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:09.537 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:09.537 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.537 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.795 [2024-07-15 18:35:44.023958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.795 Malloc1 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.795 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.796 18:35:44 
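The in-capsule variant repeats the whole flow with one visible difference in the transport RPC: assuming -c maps to the in-capsule data size (as the subtest names suggest), writes of up to 4096 bytes can ride inside the NVMe/TCP command capsule instead of being transferred separately.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# nvmf_filesystem_no_in_capsule (pid 65551): in-capsule data disabled
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0

# nvmf_filesystem_in_capsule (pid 65868): allow up to 4096 bytes in-capsule
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096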
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.053 [2024-07-15 18:35:44.283702] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.053 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:10.053 { 00:10:10.053 "aliases": [ 00:10:10.053 "0686f865-42a4-44f7-abc5-c3433efc0496" 00:10:10.053 ], 00:10:10.053 "assigned_rate_limits": { 00:10:10.053 "r_mbytes_per_sec": 0, 00:10:10.053 "rw_ios_per_sec": 0, 00:10:10.053 "rw_mbytes_per_sec": 0, 00:10:10.053 "w_mbytes_per_sec": 0 00:10:10.053 }, 00:10:10.053 "block_size": 512, 00:10:10.053 "claim_type": "exclusive_write", 00:10:10.053 "claimed": true, 00:10:10.053 "driver_specific": {}, 00:10:10.053 "memory_domains": [ 00:10:10.053 { 00:10:10.053 "dma_device_id": "system", 00:10:10.053 "dma_device_type": 1 00:10:10.053 }, 00:10:10.053 { 00:10:10.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.054 "dma_device_type": 2 00:10:10.054 } 00:10:10.054 ], 00:10:10.054 "name": "Malloc1", 00:10:10.054 "num_blocks": 1048576, 00:10:10.054 "product_name": "Malloc disk", 00:10:10.054 "supported_io_types": { 00:10:10.054 "abort": true, 00:10:10.054 "compare": false, 00:10:10.054 "compare_and_write": false, 00:10:10.054 "copy": true, 00:10:10.054 "flush": true, 00:10:10.054 "get_zone_info": false, 00:10:10.054 "nvme_admin": false, 00:10:10.054 "nvme_io": false, 00:10:10.054 "nvme_io_md": false, 00:10:10.054 "nvme_iov_md": false, 00:10:10.054 "read": true, 00:10:10.054 "reset": true, 00:10:10.054 "seek_data": false, 00:10:10.054 "seek_hole": false, 00:10:10.054 "unmap": true, 
00:10:10.054 "write": true, 00:10:10.054 "write_zeroes": true, 00:10:10.054 "zcopy": true, 00:10:10.054 "zone_append": false, 00:10:10.054 "zone_management": false 00:10:10.054 }, 00:10:10.054 "uuid": "0686f865-42a4-44f7-abc5-c3433efc0496", 00:10:10.054 "zoned": false 00:10:10.054 } 00:10:10.054 ]' 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:10.054 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.312 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.312 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:10.312 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.312 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:10.312 18:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:12.214 18:35:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:12.214 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:12.215 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:12.215 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:12.215 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:12.215 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:12.472 18:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.407 ************************************ 00:10:13.407 START TEST filesystem_in_capsule_ext4 00:10:13.407 ************************************ 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:13.407 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:13.407 18:35:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:13.407 mke2fs 1.46.5 (30-Dec-2021) 00:10:13.713 Discarding device blocks: 0/522240 done 00:10:13.713 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:13.713 Filesystem UUID: 52996052-1b4a-4e19-b164-d341e3864044 00:10:13.713 Superblock backups stored on blocks: 00:10:13.713 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:13.713 00:10:13.713 Allocating group tables: 0/64 done 00:10:13.713 Writing inode tables: 0/64 done 00:10:13.713 Creating journal (8192 blocks): done 00:10:13.714 Writing superblocks and filesystem accounting information: 0/64 done 00:10:13.714 00:10:13.714 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:13.714 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:13.714 18:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65868 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:13.714 00:10:13.714 real 0m0.418s 00:10:13.714 user 0m0.032s 00:10:13.714 sys 0m0.066s 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.714 ************************************ 00:10:13.714 END TEST filesystem_in_capsule_ext4 00:10:13.714 ************************************ 00:10:13.714 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:13.972 18:35:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.972 ************************************ 00:10:13.972 START TEST filesystem_in_capsule_btrfs 00:10:13.972 ************************************ 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:13.972 btrfs-progs v6.6.2 00:10:13.972 See https://btrfs.readthedocs.io for more information. 00:10:13.972 00:10:13.972 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:13.972 NOTE: several default settings have changed in version 5.15, please make sure 00:10:13.972 this does not affect your deployments: 00:10:13.972 - DUP for metadata (-m dup) 00:10:13.972 - enabled no-holes (-O no-holes) 00:10:13.972 - enabled free-space-tree (-R free-space-tree) 00:10:13.972 00:10:13.972 Label: (null) 00:10:13.972 UUID: dd2666bb-b6e8-4706-a524-d1ef5729e05c 00:10:13.972 Node size: 16384 00:10:13.972 Sector size: 4096 00:10:13.972 Filesystem size: 510.00MiB 00:10:13.972 Block group profiles: 00:10:13.972 Data: single 8.00MiB 00:10:13.972 Metadata: DUP 32.00MiB 00:10:13.972 System: DUP 8.00MiB 00:10:13.972 SSD detected: yes 00:10:13.972 Zoned device: no 00:10:13.972 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:13.972 Runtime features: free-space-tree 00:10:13.972 Checksum: crc32c 00:10:13.972 Number of devices: 1 00:10:13.972 Devices: 00:10:13.972 ID SIZE PATH 00:10:13.972 1 510.00MiB /dev/nvme0n1p1 00:10:13.972 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:13.972 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65868 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:14.231 ************************************ 00:10:14.231 END TEST filesystem_in_capsule_btrfs 00:10:14.231 ************************************ 00:10:14.231 00:10:14.231 real 0m0.305s 00:10:14.231 user 0m0.027s 00:10:14.231 sys 0m0.080s 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.231 ************************************ 00:10:14.231 START TEST filesystem_in_capsule_xfs 00:10:14.231 ************************************ 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:14.231 18:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:14.490 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:14.490 = sectsz=512 attr=2, projid32bit=1 00:10:14.491 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:14.491 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:14.491 data = bsize=4096 blocks=130560, imaxpct=25 00:10:14.491 = sunit=0 swidth=0 blks 00:10:14.491 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:14.491 log =internal log bsize=4096 blocks=16384, version=2 00:10:14.491 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:14.491 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:15.058 Discarding blocks...Done. 
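The mkfs.xfs report above is the last of the three filesystems (ext4, btrfs, xfs) this suite formats on the namespace's first partition; after each mkfs, target/filesystem.sh mounts the new partition, writes and deletes a file, unmounts, and confirms that the target process and block devices are still present. A minimal sketch of that verification loop, using the device, mountpoint, and pid values visible in the log (variable names here are illustrative, not the exact script):

    # per-filesystem check driven by nvmf_filesystem_create (target/filesystem.sh), sketched
    mount /dev/nvme0n1p1 /mnt/device          # mount the freshly formatted partition
    touch /mnt/device/aaa && sync             # prove a write reaches the NVMe-oF namespace
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # e.g. 65868 above: target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace is still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the partition created on it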
00:10:15.058 18:35:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:15.058 18:35:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65868 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:16.958 ************************************ 00:10:16.958 END TEST filesystem_in_capsule_xfs 00:10:16.958 ************************************ 00:10:16.958 00:10:16.958 real 0m2.669s 00:10:16.958 user 0m0.027s 00:10:16.958 sys 0m0.067s 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.958 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:16.959 18:35:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65868 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65868 ']' 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65868 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.959 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65868 00:10:17.218 killing process with pid 65868 00:10:17.218 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.218 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.218 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65868' 00:10:17.218 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65868 00:10:17.218 18:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65868 00:10:17.786 18:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:17.786 00:10:17.786 real 0m9.121s 00:10:17.786 user 0m33.816s 00:10:17.786 sys 0m1.964s 00:10:17.786 18:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.786 18:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.786 ************************************ 00:10:17.786 END TEST nvmf_filesystem_in_capsule 00:10:17.786 ************************************ 00:10:17.786 18:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:17.786 18:35:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.787 rmmod nvme_tcp 00:10:17.787 rmmod nvme_fabrics 00:10:17.787 rmmod nvme_keyring 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:17.787 00:10:17.787 real 0m19.519s 00:10:17.787 user 1m9.373s 00:10:17.787 sys 0m4.387s 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.787 ************************************ 00:10:17.787 END TEST nvmf_filesystem 00:10:17.787 ************************************ 00:10:17.787 18:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.046 18:35:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:18.046 18:35:52 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:18.046 18:35:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:18.046 18:35:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.046 18:35:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.046 ************************************ 00:10:18.046 START TEST nvmf_target_discovery 00:10:18.046 ************************************ 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:18.046 * Looking for test storage... 
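The in-capsule suite finishes by disconnecting the initiator, deleting the subsystem, killing the target, and letting nvmftestfini unload the kernel NVMe/TCP stack before the next suite (target/discovery.sh, started just below) reuses the same virtual network. A condensed sketch of that teardown, assuming the module and interface names printed in the log (_remove_spdk_ns runs with its trace redirected, so its body is not reproduced here):

    # approximate cleanup performed at the end of the suite (nvmftestfini / nvmfcleanup, nvmf/common.sh)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drop the host-side connection first
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"              # stop the nvmf_tgt reactor process (65868 above)
    sync
    modprobe -v -r nvme-tcp                         # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if                   # drop the 10.0.0.1/24 initiator test address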
00:10:18.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.046 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:18.047 Cannot find device "nvmf_tgt_br" 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.047 Cannot find device "nvmf_tgt_br2" 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:18.047 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:18.326 Cannot find device "nvmf_tgt_br" 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:18.326 Cannot find device "nvmf_tgt_br2" 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.326 18:35:52 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.326 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:18.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:18.589 00:10:18.589 --- 10.0.0.2 ping statistics --- 00:10:18.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.589 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:18.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:18.589 00:10:18.589 --- 10.0.0.3 ping statistics --- 00:10:18.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.589 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:10:18.589 00:10:18.589 --- 10.0.0.1 ping statistics --- 00:10:18.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.589 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66330 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66330 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66330 ']' 00:10:18.589 18:35:52 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.589 18:35:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 [2024-07-15 18:35:52.913830] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:18.589 [2024-07-15 18:35:52.913932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.589 [2024-07-15 18:35:53.053666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.848 [2024-07-15 18:35:53.211918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.848 [2024-07-15 18:35:53.212247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.848 [2024-07-15 18:35:53.212430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.848 [2024-07-15 18:35:53.212481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.848 [2024-07-15 18:35:53.212509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
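At this point nvmfappstart has launched nvmf_tgt (pid 66330) inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init assembled a few lines earlier: two veth pairs, a bridge, 10.0.0.x addresses, and an iptables accept rule for port 4420. A minimal sketch of that topology, using the names and addresses shown in the log; the real helper in nvmf/common.sh also brings each link up, adds a second target interface (10.0.0.3), and does more error handling:

    # approximate virtual topology built by nvmf_veth_init (nvmf/common.sh), sketched
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # initiator-to-target reachability check, as logged above

The target itself is then started as 'ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF', and waitforlisten blocks until its RPC socket answers.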
00:10:18.848 [2024-07-15 18:35:53.212731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.848 [2024-07-15 18:35:53.212910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.848 [2024-07-15 18:35:53.213014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.848 [2024-07-15 18:35:53.213019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.413 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.413 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:19.413 18:35:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.413 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.413 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 [2024-07-15 18:35:53.943054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 Null1 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.670 [2024-07-15 18:35:54.008627] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 Null2 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 Null3 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.670 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 Null4 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.671 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.671 
18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 4420 00:10:19.929 00:10:19.929 Discovery Log Number of Records 6, Generation counter 6 00:10:19.929 =====Discovery Log Entry 0====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: current discovery subsystem 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4420 00:10:19.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: explicit discovery connections, duplicate discovery information 00:10:19.929 sectype: none 00:10:19.929 =====Discovery Log Entry 1====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: nvme subsystem 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4420 00:10:19.929 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: none 00:10:19.929 sectype: none 00:10:19.929 =====Discovery Log Entry 2====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: nvme subsystem 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4420 00:10:19.929 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: none 00:10:19.929 sectype: none 00:10:19.929 =====Discovery Log Entry 3====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: nvme subsystem 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4420 00:10:19.929 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: none 00:10:19.929 sectype: none 00:10:19.929 =====Discovery Log Entry 4====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: nvme subsystem 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4420 00:10:19.929 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: none 00:10:19.929 sectype: none 00:10:19.929 =====Discovery Log Entry 5====== 00:10:19.929 trtype: tcp 00:10:19.929 adrfam: ipv4 00:10:19.929 subtype: discovery subsystem referral 00:10:19.929 treq: not required 00:10:19.929 portid: 0 00:10:19.929 trsvcid: 4430 00:10:19.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:19.929 traddr: 10.0.0.2 00:10:19.929 eflags: none 00:10:19.929 sectype: none 00:10:19.929 Perform nvmf subsystem discovery via RPC 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 [ 00:10:19.929 { 00:10:19.929 "allow_any_host": true, 00:10:19.929 "hosts": [], 00:10:19.929 "listen_addresses": [ 00:10:19.929 { 00:10:19.929 "adrfam": "IPv4", 00:10:19.929 "traddr": "10.0.0.2", 00:10:19.929 "trsvcid": "4420", 00:10:19.929 "trtype": "TCP" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:19.929 "subtype": "Discovery" 00:10:19.929 }, 00:10:19.929 { 00:10:19.929 "allow_any_host": true, 00:10:19.929 "hosts": [], 00:10:19.929 "listen_addresses": [ 00:10:19.929 { 
00:10:19.929 "adrfam": "IPv4", 00:10:19.929 "traddr": "10.0.0.2", 00:10:19.929 "trsvcid": "4420", 00:10:19.929 "trtype": "TCP" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "max_cntlid": 65519, 00:10:19.929 "max_namespaces": 32, 00:10:19.929 "min_cntlid": 1, 00:10:19.929 "model_number": "SPDK bdev Controller", 00:10:19.929 "namespaces": [ 00:10:19.929 { 00:10:19.929 "bdev_name": "Null1", 00:10:19.929 "name": "Null1", 00:10:19.929 "nguid": "58E33D66F50F45D09CDC9EB5173C07C5", 00:10:19.929 "nsid": 1, 00:10:19.929 "uuid": "58e33d66-f50f-45d0-9cdc-9eb5173c07c5" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:19.929 "serial_number": "SPDK00000000000001", 00:10:19.929 "subtype": "NVMe" 00:10:19.929 }, 00:10:19.929 { 00:10:19.929 "allow_any_host": true, 00:10:19.929 "hosts": [], 00:10:19.929 "listen_addresses": [ 00:10:19.929 { 00:10:19.929 "adrfam": "IPv4", 00:10:19.929 "traddr": "10.0.0.2", 00:10:19.929 "trsvcid": "4420", 00:10:19.929 "trtype": "TCP" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "max_cntlid": 65519, 00:10:19.929 "max_namespaces": 32, 00:10:19.929 "min_cntlid": 1, 00:10:19.929 "model_number": "SPDK bdev Controller", 00:10:19.929 "namespaces": [ 00:10:19.929 { 00:10:19.929 "bdev_name": "Null2", 00:10:19.929 "name": "Null2", 00:10:19.929 "nguid": "9F7F7B44DE814308A97DF7825D7F88F6", 00:10:19.929 "nsid": 1, 00:10:19.929 "uuid": "9f7f7b44-de81-4308-a97d-f7825d7f88f6" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:19.929 "serial_number": "SPDK00000000000002", 00:10:19.929 "subtype": "NVMe" 00:10:19.929 }, 00:10:19.929 { 00:10:19.929 "allow_any_host": true, 00:10:19.929 "hosts": [], 00:10:19.929 "listen_addresses": [ 00:10:19.929 { 00:10:19.929 "adrfam": "IPv4", 00:10:19.929 "traddr": "10.0.0.2", 00:10:19.929 "trsvcid": "4420", 00:10:19.929 "trtype": "TCP" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "max_cntlid": 65519, 00:10:19.929 "max_namespaces": 32, 00:10:19.929 "min_cntlid": 1, 00:10:19.929 "model_number": "SPDK bdev Controller", 00:10:19.929 "namespaces": [ 00:10:19.929 { 00:10:19.929 "bdev_name": "Null3", 00:10:19.929 "name": "Null3", 00:10:19.929 "nguid": "DF98379E5F184D1791436A3901F4D780", 00:10:19.929 "nsid": 1, 00:10:19.929 "uuid": "df98379e-5f18-4d17-9143-6a3901f4d780" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:19.929 "serial_number": "SPDK00000000000003", 00:10:19.929 "subtype": "NVMe" 00:10:19.929 }, 00:10:19.929 { 00:10:19.929 "allow_any_host": true, 00:10:19.929 "hosts": [], 00:10:19.929 "listen_addresses": [ 00:10:19.929 { 00:10:19.929 "adrfam": "IPv4", 00:10:19.929 "traddr": "10.0.0.2", 00:10:19.929 "trsvcid": "4420", 00:10:19.929 "trtype": "TCP" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "max_cntlid": 65519, 00:10:19.929 "max_namespaces": 32, 00:10:19.929 "min_cntlid": 1, 00:10:19.929 "model_number": "SPDK bdev Controller", 00:10:19.929 "namespaces": [ 00:10:19.929 { 00:10:19.929 "bdev_name": "Null4", 00:10:19.929 "name": "Null4", 00:10:19.929 "nguid": "4C20555F0280439085ED2A659097048F", 00:10:19.929 "nsid": 1, 00:10:19.929 "uuid": "4c20555f-0280-4390-85ed-2a659097048f" 00:10:19.929 } 00:10:19.929 ], 00:10:19.929 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:19.929 "serial_number": "SPDK00000000000004", 00:10:19.929 "subtype": "NVMe" 00:10:19.929 } 00:10:19.929 ] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:19.929 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.930 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.930 rmmod nvme_tcp 00:10:19.930 rmmod nvme_fabrics 00:10:20.188 rmmod nvme_keyring 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66330 ']' 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66330 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66330 ']' 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66330 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66330 00:10:20.189 killing process with pid 66330 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66330' 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66330 00:10:20.189 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66330 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:20.447 00:10:20.447 real 0m2.549s 00:10:20.447 user 0m6.386s 00:10:20.447 sys 0m0.834s 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.447 ************************************ 00:10:20.447 END TEST nvmf_target_discovery 00:10:20.447 18:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:20.447 ************************************ 00:10:20.447 18:35:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.447 18:35:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:20.447 18:35:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.447 18:35:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.447 18:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.447 ************************************ 00:10:20.447 START TEST nvmf_referrals 00:10:20.447 ************************************ 00:10:20.447 18:35:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:20.704 * Looking for test storage... 
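The tail of nvmf_target_discovery above is just a cleanup pass: one nvmf_delete_subsystem plus one bdev_null_delete per test index, then the 10.0.0.2:4430 referral is dropped and bdev_get_bdevs is expected to come back empty. A minimal stand-alone sketch of that pass, assuming a default SPDK checkout where scripts/rpc.py talks to the target over /var/tmp/spdk.sock:

  # Tear down the discovery-test fixtures created earlier in the suite.
  RPC=scripts/rpc.py                      # assumption: run from the SPDK repo root
  for i in $(seq 1 4); do
      "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      "$RPC" bdev_null_delete "Null$i"
  done
  "$RPC" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # The check that follows in the trace: no bdevs should remain.
  [[ -z "$("$RPC" bdev_get_bdevs | jq -r '.[].name')" ]]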
00:10:20.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.704 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:20.705 Cannot find device "nvmf_tgt_br" 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.705 Cannot find device "nvmf_tgt_br2" 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:20.705 Cannot find device "nvmf_tgt_br" 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:20.705 Cannot find device "nvmf_tgt_br2" 
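The "Cannot find device" messages here are expected: before building its topology, nvmf_veth_init tears down whatever a previous run may have left behind and tolerates interfaces that are already gone. A rough equivalent of that tolerant cleanup (interface and namespace names as in the trace; needs root):

  # Remove any leftover test interfaces/bridge from an earlier run; ignore misses.
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true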
00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:20.705 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:20.964 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.965 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:21.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:10:21.223 00:10:21.223 --- 10.0.0.2 ping statistics --- 00:10:21.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.223 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:21.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:10:21.223 00:10:21.223 --- 10.0.0.3 ping statistics --- 00:10:21.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.223 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:21.223 00:10:21.223 --- 10.0.0.1 ping statistics --- 00:10:21.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.223 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66561 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66561 00:10:21.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
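By this point nvmf_veth_init has finished wiring the test network: a veth pair for the initiator side, a veth pair whose far end lives in the nvmf_tgt_ns_spdk namespace, both host-side peers enslaved to the nvmf_br bridge, and the ping checks above confirming 10.0.0.2 and 10.0.0.3 are reachable. A condensed sketch of that build-out (same names and addresses as the trace, second target interface omitted for brevity, run as root):

  # Target namespace plus two veth pairs; host-side peers get bridged together.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 = initiator side, 10.0.0.2 = target inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up and join the host-side peers to one bridge.
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP (port 4420) in from the initiator interface, then sanity-check.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2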
00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66561 ']' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.223 18:35:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.223 [2024-07-15 18:35:55.576208] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:21.223 [2024-07-15 18:35:55.576370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.480 [2024-07-15 18:35:55.732501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.480 [2024-07-15 18:35:55.903290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.480 [2024-07-15 18:35:55.903364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.480 [2024-07-15 18:35:55.903379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.480 [2024-07-15 18:35:55.903392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.480 [2024-07-15 18:35:55.903404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
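The notices above come from nvmfappstart launching the target inside the test namespace with the flags shown in the trace: -i 0 picks the shared-memory id, -e 0xFFFF enables every tracepoint group, -m 0xF runs reactors on cores 0-3. A sketch of that launch (SPDK repo path as it appears in the trace):

  # Start the NVMe-oF target in the namespace created earlier and remember its pid.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  NVMF_PID=$!
  # As the startup notice says, a runtime snapshot of those tracepoints could
  # later be captured with: spdk_trace -s nvmf -i 0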
00:10:21.480 [2024-07-15 18:35:55.903627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.480 [2024-07-15 18:35:55.903741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.480 [2024-07-15 18:35:55.904127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.480 [2024-07-15 18:35:55.904499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 [2024-07-15 18:35:56.621534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 [2024-07-15 18:35:56.648124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.416 
18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:22.416 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.676 18:35:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.676 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
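Each referral assertion in this suite follows the same round trip: mutate the referral table over RPC, read it back with nvmf_discovery_get_referrals, and cross-check what an initiator sees in the discovery log on 10.0.0.2:8009. A condensed sketch of one such round trip (the real run generates the host NQN/ID with nvme gen-hostnqn and also passes --hostid; that flag is left out here):

  RPC=scripts/rpc.py
  HOSTNQN=$(nvme gen-hostnqn)        # harness derives --hostid from this value as well

  # Advertise a referral that points at a specific NVM subsystem rather than a
  # discovery service.
  "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # Target-side view: what the referral table now contains.
  "$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Initiator-side view: the discovery log on the listener added earlier (8009),
  # filtered to everything except the "current discovery subsystem" record.
  nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort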
00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:22.935 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.194 18:35:57 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:23.194 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.453 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.454 rmmod nvme_tcp 00:10:23.454 rmmod nvme_fabrics 00:10:23.454 rmmod nvme_keyring 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66561 ']' 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66561 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66561 ']' 00:10:23.454 18:35:57 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66561 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66561 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.454 killing process with pid 66561 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66561' 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66561 00:10:23.454 18:35:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66561 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:24.020 00:10:24.020 real 0m3.382s 00:10:24.020 user 0m9.972s 00:10:24.020 sys 0m1.114s 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.020 ************************************ 00:10:24.020 END TEST nvmf_referrals 00:10:24.020 ************************************ 00:10:24.020 18:35:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.020 18:35:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:24.020 18:35:58 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:24.020 18:35:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:24.020 18:35:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.020 18:35:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.020 ************************************ 00:10:24.020 START TEST nvmf_connect_disconnect 00:10:24.021 ************************************ 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:24.021 * Looking for test storage... 
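The real/user/sys lines and the START/END banners come from run_test, which wraps each per-suite script with timing; nvmf.sh is simply handing it the next script in its list. To reproduce either suite outside the harness, the underlying scripts can be invoked directly (paths as they appear in the trace; root and a built SPDK tree are assumed):

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/target/referrals.sh --transport=tcp            # the suite that just ended
  sudo ./test/nvmf/target/connect_disconnect.sh --transport=tcp   # the suite starting below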
00:10:24.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.021 18:35:58 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.021 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:24.279 Cannot find device "nvmf_tgt_br" 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.279 Cannot find device "nvmf_tgt_br2" 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:24.279 Cannot find device "nvmf_tgt_br" 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:24.279 Cannot find device 
"nvmf_tgt_br2" 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:24.279 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:24.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:24.537 00:10:24.537 --- 10.0.0.2 ping statistics --- 00:10:24.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.537 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:24.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:24.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:24.537 00:10:24.537 --- 10.0.0.3 ping statistics --- 00:10:24.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.537 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:24.537 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:24.538 00:10:24.538 --- 10.0.0.1 ping statistics --- 00:10:24.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.538 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66864 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66864 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66864 ']' 00:10:24.538 
18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.538 18:35:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:24.538 [2024-07-15 18:35:58.923211] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:24.538 [2024-07-15 18:35:58.923336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.796 [2024-07-15 18:35:59.071559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.796 [2024-07-15 18:35:59.247924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.796 [2024-07-15 18:35:59.248017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.796 [2024-07-15 18:35:59.248032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.796 [2024-07-15 18:35:59.248046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.796 [2024-07-15 18:35:59.248057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
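[editor's note] The nvmf_veth_init sequence traced above builds a small virtual topology: a network namespace for the target, veth pairs whose host-side peers hang off a bridge, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch under those assumptions (interface, namespace, and address names are copied from the log; the second target interface pair nvmf_tgt_if2/nvmf_tgt_br2 and the link-up steps are elided for brevity):

  ip netns add nvmf_tgt_ns_spdk                                   # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joining the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the host
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                              # connectivity check, as in the log

The ping statistics printed in the log are simply these sanity checks succeeding across the bridge before the target is started.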
00:10:24.796 [2024-07-15 18:35:59.248273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.796 [2024-07-15 18:35:59.248465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.796 [2024-07-15 18:35:59.249304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.796 [2024-07-15 18:35:59.249310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.729 18:35:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.729 18:35:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:25.729 18:35:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.729 18:35:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.729 18:35:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.729 [2024-07-15 18:36:00.018741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:25.729 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-07-15 18:36:00.094661] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:25.730 18:36:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:28.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.147 rmmod nvme_tcp 00:10:37.147 rmmod nvme_fabrics 00:10:37.147 rmmod nvme_keyring 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66864 ']' 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66864 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66864 ']' 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66864 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66864 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.147 killing process with pid 66864 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66864' 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66864 00:10:37.147 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66864 00:10:37.458 18:36:11 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:37.458 00:10:37.458 real 0m13.567s 00:10:37.458 user 0m49.218s 00:10:37.458 sys 0m2.218s 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.458 ************************************ 00:10:37.458 18:36:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 END TEST nvmf_connect_disconnect 00:10:37.458 ************************************ 00:10:37.717 18:36:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:37.717 18:36:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:37.717 18:36:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:37.717 18:36:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.717 18:36:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.717 ************************************ 00:10:37.717 START TEST nvmf_multitarget 00:10:37.717 ************************************ 00:10:37.717 18:36:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:37.717 * Looking for test storage... 
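[editor's note] For reference, the nvmf_connect_disconnect run that just finished (13.5 s wall time) reduces to: provision a Malloc-backed subsystem over the target's RPC socket, then connect and disconnect an initiator num_iterations=5 times. A rough shell sketch; the RPC arguments are copied from the trace above, rpc.py stands in for the rpc_cmd wrapper used by the test, and the nvme-cli flags are the usual ones rather than the exact helper the script calls:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                      # 64 MiB bdev, 512-byte blocks -> "Malloc0"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  for i in $(seq 1 5); do
      nvme connect    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # prints "... disconnected 1 controller(s)"
  done

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines in the log are the output of that loop.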
00:10:37.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.718 18:36:12 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:37.718 Cannot find device "nvmf_tgt_br" 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.718 Cannot find device "nvmf_tgt_br2" 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:37.718 Cannot find device "nvmf_tgt_br" 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:37.718 Cannot find device "nvmf_tgt_br2" 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:10:37.718 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:37.977 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:37.977 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:10:37.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.978 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:10:37.978 00:10:37.978 --- 10.0.0.2 ping statistics --- 00:10:37.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.978 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:38.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:38.236 00:10:38.236 --- 10.0.0.3 ping statistics --- 00:10:38.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.236 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:38.236 00:10:38.236 --- 10.0.0.1 ping statistics --- 00:10:38.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.236 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67269 00:10:38.236 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67269 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67269 ']' 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
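[editor's note] nvmfappstart above amounts to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock path printed in the log; the real waitforlisten helper in autotest_common.sh also tracks the pid and its max_retries=100 budget, so treat this as the gist only:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll until the app has created its RPC socket and responds to a trivial method call.
  for i in $(seq 1 100); do
      rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

Once the loop exits, the "Reactor started on core N" notices confirm the target's event loop is running on the cores selected by -m 0xF.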
00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.237 18:36:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.237 [2024-07-15 18:36:12.563127] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:38.237 [2024-07-15 18:36:12.563228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.237 [2024-07-15 18:36:12.713214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.496 [2024-07-15 18:36:12.883149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.496 [2024-07-15 18:36:12.883225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.496 [2024-07-15 18:36:12.883241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.496 [2024-07-15 18:36:12.883254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.496 [2024-07-15 18:36:12.883265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.496 [2024-07-15 18:36:12.883497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.496 [2024-07-15 18:36:12.883684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.496 [2024-07-15 18:36:12.884608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.496 [2024-07-15 18:36:12.884615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.064 18:36:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.064 18:36:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:10:39.064 18:36:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.064 18:36:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.064 18:36:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:39.322 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:39.580 "nvmf_tgt_1" 00:10:39.580 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:39.580 "nvmf_tgt_2" 00:10:39.580 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
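[editor's note] The multitarget steps above exercise SPDK's multi-target RPCs: count the default target, create two more, then delete them and re-check the count. A condensed sketch using the same multitarget_rpc.py wrapper and jq length checks that appear in the trace (error handling and the test's trap on failure are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32           # log echoes "nvmf_tgt_1"
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32           # log echoes "nvmf_tgt_2"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1                 # log echoes "true"
  $rpc nvmf_delete_target -n nvmf_tgt_2                 # log echoes "true"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target

The bare "nvmf_tgt_1", "nvmf_tgt_2", and "true" lines in the log are the return values of those RPCs.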
00:10:39.580 18:36:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:39.580 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:39.580 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:39.839 true 00:10:39.839 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:39.839 true 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.098 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.098 rmmod nvme_tcp 00:10:40.098 rmmod nvme_fabrics 00:10:40.098 rmmod nvme_keyring 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67269 ']' 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67269 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67269 ']' 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67269 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67269 00:10:40.357 killing process with pid 67269 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67269' 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67269 00:10:40.357 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67269 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.616 18:36:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.616 18:36:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.616 00:10:40.616 real 0m3.031s 00:10:40.616 user 0m9.230s 00:10:40.616 sys 0m0.904s 00:10:40.616 18:36:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.616 18:36:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:40.616 ************************************ 00:10:40.616 END TEST nvmf_multitarget 00:10:40.616 ************************************ 00:10:40.616 18:36:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.616 18:36:15 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:40.616 18:36:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.616 18:36:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.616 18:36:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.616 ************************************ 00:10:40.616 START TEST nvmf_rpc 00:10:40.616 ************************************ 00:10:40.616 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:40.875 * Looking for test storage... 
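[editor's note] Both tests above close out through the same nvmftestfini path: unload the initiator-side NVMe modules, kill the target process, and tear the virtual network back down. A rough sketch of that cleanup; the modprobe and flush commands mirror the log, while the namespace and bridge removal stand in for the _remove_spdk_ns helper whose output is redirected to /dev/null in the trace:

  modprobe -v -r nvme-tcp                  # log shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"       # stop the nvmf_tgt reactor process (killprocess in the log)
  ip -4 addr flush nvmf_init_if            # drop the 10.0.0.1/24 initiator address
  ip netns delete nvmf_tgt_ns_spdk         # removes the namespace and the veth ends parked inside it
  ip link delete nvmf_br type bridge 2>/dev/null || true

The "Cannot find device" and "Cannot open network namespace" messages at the start of each nvmftestinit are this same cleanup being re-run defensively before the next test builds the topology fresh.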
00:10:40.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.875 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.876 Cannot find device "nvmf_tgt_br" 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.876 Cannot find device "nvmf_tgt_br2" 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.876 Cannot find device "nvmf_tgt_br" 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.876 Cannot find device "nvmf_tgt_br2" 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.876 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:41.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:41.136 00:10:41.136 --- 10.0.0.2 ping statistics --- 00:10:41.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.136 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:41.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:41.136 00:10:41.136 --- 10.0.0.3 ping statistics --- 00:10:41.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.136 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:41.136 00:10:41.136 --- 10.0.0.1 ping statistics --- 00:10:41.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.136 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67500 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67500 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67500 ']' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.136 18:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.394 [2024-07-15 18:36:15.625513] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:41.394 [2024-07-15 18:36:15.625612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.394 [2024-07-15 18:36:15.767814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.652 [2024-07-15 18:36:15.886022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.652 [2024-07-15 18:36:15.886090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:41.652 [2024-07-15 18:36:15.886105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.653 [2024-07-15 18:36:15.886119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.653 [2024-07-15 18:36:15.886130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.653 [2024-07-15 18:36:15.886304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.653 [2024-07-15 18:36:15.886680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.653 [2024-07-15 18:36:15.887530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.653 [2024-07-15 18:36:15.887536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:42.218 "poll_groups": [ 00:10:42.218 { 00:10:42.218 "admin_qpairs": 0, 00:10:42.218 "completed_nvme_io": 0, 00:10:42.218 "current_admin_qpairs": 0, 00:10:42.218 "current_io_qpairs": 0, 00:10:42.218 "io_qpairs": 0, 00:10:42.218 "name": "nvmf_tgt_poll_group_000", 00:10:42.218 "pending_bdev_io": 0, 00:10:42.218 "transports": [] 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "admin_qpairs": 0, 00:10:42.218 "completed_nvme_io": 0, 00:10:42.218 "current_admin_qpairs": 0, 00:10:42.218 "current_io_qpairs": 0, 00:10:42.218 "io_qpairs": 0, 00:10:42.218 "name": "nvmf_tgt_poll_group_001", 00:10:42.218 "pending_bdev_io": 0, 00:10:42.218 "transports": [] 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "admin_qpairs": 0, 00:10:42.218 "completed_nvme_io": 0, 00:10:42.218 "current_admin_qpairs": 0, 00:10:42.218 "current_io_qpairs": 0, 00:10:42.218 "io_qpairs": 0, 00:10:42.218 "name": "nvmf_tgt_poll_group_002", 00:10:42.218 "pending_bdev_io": 0, 00:10:42.218 "transports": [] 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "admin_qpairs": 0, 00:10:42.218 "completed_nvme_io": 0, 00:10:42.218 "current_admin_qpairs": 0, 00:10:42.218 "current_io_qpairs": 0, 00:10:42.218 "io_qpairs": 0, 00:10:42.218 "name": "nvmf_tgt_poll_group_003", 00:10:42.218 "pending_bdev_io": 0, 00:10:42.218 "transports": [] 00:10:42.218 } 00:10:42.218 ], 00:10:42.218 "tick_rate": 2100000000 00:10:42.218 }' 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:42.218 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:42.219 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:42.219 18:36:16 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:42.219 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:42.219 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 [2024-07-15 18:36:16.742921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:42.478 "poll_groups": [ 00:10:42.478 { 00:10:42.478 "admin_qpairs": 0, 00:10:42.478 "completed_nvme_io": 0, 00:10:42.478 "current_admin_qpairs": 0, 00:10:42.478 "current_io_qpairs": 0, 00:10:42.478 "io_qpairs": 0, 00:10:42.478 "name": "nvmf_tgt_poll_group_000", 00:10:42.478 "pending_bdev_io": 0, 00:10:42.478 "transports": [ 00:10:42.478 { 00:10:42.478 "trtype": "TCP" 00:10:42.478 } 00:10:42.478 ] 00:10:42.478 }, 00:10:42.478 { 00:10:42.478 "admin_qpairs": 0, 00:10:42.478 "completed_nvme_io": 0, 00:10:42.478 "current_admin_qpairs": 0, 00:10:42.478 "current_io_qpairs": 0, 00:10:42.478 "io_qpairs": 0, 00:10:42.478 "name": "nvmf_tgt_poll_group_001", 00:10:42.478 "pending_bdev_io": 0, 00:10:42.478 "transports": [ 00:10:42.478 { 00:10:42.478 "trtype": "TCP" 00:10:42.478 } 00:10:42.478 ] 00:10:42.478 }, 00:10:42.478 { 00:10:42.478 "admin_qpairs": 0, 00:10:42.478 "completed_nvme_io": 0, 00:10:42.478 "current_admin_qpairs": 0, 00:10:42.478 "current_io_qpairs": 0, 00:10:42.478 "io_qpairs": 0, 00:10:42.478 "name": "nvmf_tgt_poll_group_002", 00:10:42.478 "pending_bdev_io": 0, 00:10:42.478 "transports": [ 00:10:42.478 { 00:10:42.478 "trtype": "TCP" 00:10:42.478 } 00:10:42.478 ] 00:10:42.478 }, 00:10:42.478 { 00:10:42.478 "admin_qpairs": 0, 00:10:42.478 "completed_nvme_io": 0, 00:10:42.478 "current_admin_qpairs": 0, 00:10:42.478 "current_io_qpairs": 0, 00:10:42.478 "io_qpairs": 0, 00:10:42.478 "name": "nvmf_tgt_poll_group_003", 00:10:42.478 "pending_bdev_io": 0, 00:10:42.478 "transports": [ 00:10:42.478 { 00:10:42.478 "trtype": "TCP" 00:10:42.478 } 00:10:42.478 ] 00:10:42.478 } 00:10:42.478 ], 00:10:42.478 "tick_rate": 2100000000 00:10:42.478 }' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
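The jcount and jsum helpers used above are thin wrappers around jq and awk over the nvmf_get_stats output: jcount counts how many values a filter yields, jsum adds them up. The same checks can be reproduced by hand against a running target, roughly as follows (a sketch, assuming SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock):

  stats=$(scripts/rpc.py nvmf_get_stats)
  # one poll group per reactor core; -m 0xF gives four
  echo "$stats" | jq '.poll_groups[].name' | wc -l
  # transports[] stays empty (null) until nvmf_create_transport -t tcp has run
  echo "$stats" | jq '.poll_groups[0].transports[0]'
  # total qpairs across all poll groups, as jsum computes
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'

Right after nvmf_create_transport both sums are still zero; the non-zero totals only show up in the final nvmf_get_stats dump once the connect/disconnect loops further down have run.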
00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 Malloc1 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.478 [2024-07-15 18:36:16.942573] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.2 -s 4420 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.2 -s 4420 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:42.478 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:42.479 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.2 -s 4420 00:10:42.737 [2024-07-15 18:36:16.970932] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08' 00:10:42.737 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:42.737 could not add new controller: failed to write to nvme-fabrics device 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.737 18:36:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.738 18:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.738 18:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.738 18:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.738 18:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:42.738 18:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.269 18:36:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.269 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.270 [2024-07-15 18:36:19.302899] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08' 00:10:45.270 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:45.270 could not add new controller: failed to write to nvme-fabrics device 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:45.270 18:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:47.173 18:36:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.173 [2024-07-15 18:36:21.602859] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.173 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.445 18:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.445 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:47.445 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.445 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:47.445 18:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:49.347 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:49.605 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 [2024-07-15 18:36:23.908469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.606 18:36:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.865 18:36:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.865 18:36:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:49.865 18:36:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.865 18:36:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.865 18:36:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:51.775 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.033 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.034 [2024-07-15 18:36:26.338391] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.034 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.293 18:36:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.293 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:52.293 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.293 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:52.293 18:36:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 [2024-07-15 18:36:28.651540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.199 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.457 18:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.457 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.457 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.457 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:54.457 18:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.984 
18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 [2024-07-15 18:36:30.977287] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 18:36:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.984 18:36:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.984 18:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.984 18:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.984 18:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.984 18:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:56.984 18:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:58.885 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 [2024-07-15 18:36:33.423481] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 [2024-07-15 18:36:33.471576] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.144 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 [2024-07-15 18:36:33.519638] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
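Both loops drive the same subsystem lifecycle over the RPC socket; the earlier passes additionally attach and detach from the initiator side with nvme-cli and poll lsblk until the namespace's serial appears (the harness retries up to 15 times with a 2-second sleep). One iteration of the connect-oriented variant looks roughly like this (a sketch, assuming scripts/rpc.py against the target's /var/tmp/spdk.sock; $HOSTNQN stands in for the uuid-based host NQN this run uses, and the harness also passes a matching --hostid):

  NQN=nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
  # initiator side
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn "$HOSTNQN"
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n "$NQN"
  # teardown
  scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$NQN"

The rpc-only passes in this last loop skip the nvme-cli steps, let the namespace default to nsid 1, and remove that nsid before deleting the subsystem.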
00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 [2024-07-15 18:36:33.575769] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.145 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.403 [2024-07-15 18:36:33.627872] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:59.404 "poll_groups": [ 00:10:59.404 { 00:10:59.404 "admin_qpairs": 2, 00:10:59.404 "completed_nvme_io": 66, 00:10:59.404 "current_admin_qpairs": 0, 00:10:59.404 "current_io_qpairs": 0, 00:10:59.404 "io_qpairs": 16, 00:10:59.404 "name": "nvmf_tgt_poll_group_000", 00:10:59.404 "pending_bdev_io": 0, 00:10:59.404 "transports": [ 00:10:59.404 { 00:10:59.404 "trtype": "TCP" 00:10:59.404 } 00:10:59.404 ] 00:10:59.404 }, 00:10:59.404 { 00:10:59.404 "admin_qpairs": 3, 00:10:59.404 "completed_nvme_io": 117, 00:10:59.404 "current_admin_qpairs": 0, 00:10:59.404 "current_io_qpairs": 0, 00:10:59.404 "io_qpairs": 17, 00:10:59.404 "name": "nvmf_tgt_poll_group_001", 00:10:59.404 "pending_bdev_io": 0, 00:10:59.404 "transports": [ 00:10:59.404 { 00:10:59.404 "trtype": "TCP" 00:10:59.404 } 00:10:59.404 ] 00:10:59.404 }, 00:10:59.404 { 00:10:59.404 "admin_qpairs": 1, 00:10:59.404 
"completed_nvme_io": 168, 00:10:59.404 "current_admin_qpairs": 0, 00:10:59.404 "current_io_qpairs": 0, 00:10:59.404 "io_qpairs": 19, 00:10:59.404 "name": "nvmf_tgt_poll_group_002", 00:10:59.404 "pending_bdev_io": 0, 00:10:59.404 "transports": [ 00:10:59.404 { 00:10:59.404 "trtype": "TCP" 00:10:59.404 } 00:10:59.404 ] 00:10:59.404 }, 00:10:59.404 { 00:10:59.404 "admin_qpairs": 1, 00:10:59.404 "completed_nvme_io": 69, 00:10:59.404 "current_admin_qpairs": 0, 00:10:59.404 "current_io_qpairs": 0, 00:10:59.404 "io_qpairs": 18, 00:10:59.404 "name": "nvmf_tgt_poll_group_003", 00:10:59.404 "pending_bdev_io": 0, 00:10:59.404 "transports": [ 00:10:59.404 { 00:10:59.404 "trtype": "TCP" 00:10:59.404 } 00:10:59.404 ] 00:10:59.404 } 00:10:59.404 ], 00:10:59.404 "tick_rate": 2100000000 00:10:59.404 }' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.404 rmmod nvme_tcp 00:10:59.404 rmmod nvme_fabrics 00:10:59.404 rmmod nvme_keyring 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67500 ']' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67500 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67500 ']' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67500 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.404 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67500 00:10:59.704 killing process with pid 67500 00:10:59.704 18:36:33 nvmf_tcp.nvmf_rpc 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.704 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.704 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67500' 00:10:59.704 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67500 00:10:59.704 18:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67500 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:59.962 00:10:59.962 real 0m19.210s 00:10:59.962 user 1m11.579s 00:10:59.962 sys 0m2.983s 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.962 ************************************ 00:10:59.962 END TEST nvmf_rpc 00:10:59.962 ************************************ 00:10:59.962 18:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 18:36:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:59.962 18:36:34 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:59.962 18:36:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.962 18:36:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.962 18:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 ************************************ 00:10:59.962 START TEST nvmf_invalid 00:10:59.962 ************************************ 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:59.963 * Looking for test storage... 
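The qpair totals checked at the end of the rpc test above come from its jsum helper, which pipes nvmf_get_stats through jq and awk. A standalone sketch of that aggregation, using the same filter and summation seen in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stats=$("$rpc" nvmf_get_stats)
    # sum one numeric field across all poll groups, here io_qpairs
    total=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
    (( total > 0 ))   # the test only asserts that the summed counters are positive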
00:10:59.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.963 
18:36:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.963 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.221 18:36:34 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:00.221 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:00.222 Cannot find device "nvmf_tgt_br" 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.222 Cannot find device "nvmf_tgt_br2" 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:00.222 Cannot find device "nvmf_tgt_br" 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:00.222 Cannot find device "nvmf_tgt_br2" 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.222 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:00.222 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.480 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:00.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:11:00.480 00:11:00.480 --- 10.0.0.2 ping statistics --- 00:11:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.481 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:00.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:00.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:11:00.481 00:11:00.481 --- 10.0.0.3 ping statistics --- 00:11:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.481 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:00.481 00:11:00.481 --- 10.0.0.1 ping statistics --- 00:11:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.481 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=68017 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 68017 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 68017 ']' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.481 18:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:00.481 [2024-07-15 18:36:34.890690] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
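Before the invalid-parameter cases run, nvmf_veth_init builds the virtual test network that the three pings above verify: an initiator veth in the root namespace, two target veths inside the nvmf_tgt_ns_spdk namespace, and a bridge joining their peers. A condensed sketch of that sequence, using the device names and addresses from the trace:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses: 10.0.0.1 for the initiator, 10.0.0.2/10.0.0.3 on the target side
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace peers together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # let NVMe/TCP traffic reach port 4420 and cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as in the trace

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its listener notices report 10.0.0.2.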
00:11:00.481 [2024-07-15 18:36:34.890782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.740 [2024-07-15 18:36:35.024560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.740 [2024-07-15 18:36:35.179184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.740 [2024-07-15 18:36:35.179257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.740 [2024-07-15 18:36:35.179269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.740 [2024-07-15 18:36:35.179279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.740 [2024-07-15 18:36:35.179288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.740 [2024-07-15 18:36:35.179498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.740 [2024-07-15 18:36:35.179552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.740 [2024-07-15 18:36:35.180240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.740 [2024-07-15 18:36:35.180240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:01.674 18:36:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15220 00:11:01.933 [2024-07-15 18:36:36.255437] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:01.933 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15220 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:01.933 request: 00:11:01.933 { 00:11:01.933 "method": "nvmf_create_subsystem", 00:11:01.933 "params": { 00:11:01.933 "nqn": "nqn.2016-06.io.spdk:cnode15220", 00:11:01.933 "tgt_name": "foobar" 00:11:01.933 } 00:11:01.933 } 00:11:01.933 Got JSON-RPC error response 00:11:01.933 GoRPCClient: error on JSON-RPC call' 00:11:01.933 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15220 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:01.933 
request: 00:11:01.933 { 00:11:01.933 "method": "nvmf_create_subsystem", 00:11:01.933 "params": { 00:11:01.933 "nqn": "nqn.2016-06.io.spdk:cnode15220", 00:11:01.933 "tgt_name": "foobar" 00:11:01.933 } 00:11:01.933 } 00:11:01.933 Got JSON-RPC error response 00:11:01.933 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:01.933 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:01.933 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13458 00:11:02.224 [2024-07-15 18:36:36.539924] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13458: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:02.225 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13458 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:02.225 request: 00:11:02.225 { 00:11:02.225 "method": "nvmf_create_subsystem", 00:11:02.225 "params": { 00:11:02.225 "nqn": "nqn.2016-06.io.spdk:cnode13458", 00:11:02.225 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:02.225 } 00:11:02.225 } 00:11:02.225 Got JSON-RPC error response 00:11:02.225 GoRPCClient: error on JSON-RPC call' 00:11:02.225 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13458 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:02.225 request: 00:11:02.225 { 00:11:02.225 "method": "nvmf_create_subsystem", 00:11:02.225 "params": { 00:11:02.225 "nqn": "nqn.2016-06.io.spdk:cnode13458", 00:11:02.225 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:02.225 } 00:11:02.225 } 00:11:02.225 Got JSON-RPC error response 00:11:02.225 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:02.225 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:02.225 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3686 00:11:02.483 [2024-07-15 18:36:36.876448] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3686: invalid model number 'SPDK_Controller' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode3686], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:02.483 request: 00:11:02.483 { 00:11:02.483 "method": "nvmf_create_subsystem", 00:11:02.483 "params": { 00:11:02.483 "nqn": "nqn.2016-06.io.spdk:cnode3686", 00:11:02.483 "model_number": "SPDK_Controller\u001f" 00:11:02.483 } 00:11:02.483 } 00:11:02.483 Got JSON-RPC error response 00:11:02.483 GoRPCClient: error on JSON-RPC call' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 18:36:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode3686], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:02.483 request: 00:11:02.483 { 00:11:02.483 "method": "nvmf_create_subsystem", 00:11:02.483 "params": { 00:11:02.483 "nqn": "nqn.2016-06.io.spdk:cnode3686", 00:11:02.483 "model_number": "SPDK_Controller\u001f" 00:11:02.483 } 00:11:02.483 } 00:11:02.483 Got JSON-RPC error response 00:11:02.483 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:02.483 18:36:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.483 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:02.740 18:36:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:02.740 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:02.741 18:36:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:02.741 18:36:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:11:02.741 18:36:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'GLr#L'\''G\7HYj0pd8z+ /dev/null' 00:11:06.407 18:36:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.407 18:36:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:06.407 00:11:06.407 real 0m6.510s 00:11:06.407 user 0m25.717s 00:11:06.407 sys 0m1.681s 00:11:06.407 ************************************ 00:11:06.407 END TEST nvmf_invalid 00:11:06.407 ************************************ 00:11:06.407 18:36:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.407 18:36:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:06.407 18:36:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:06.407 18:36:40 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:06.407 18:36:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.407 18:36:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.407 18:36:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.665 ************************************ 00:11:06.665 START TEST nvmf_abort 00:11:06.665 ************************************ 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:06.665 * Looking for test storage... 00:11:06.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.665 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.666 18:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:06.666 Cannot find device "nvmf_tgt_br" 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.666 Cannot find device "nvmf_tgt_br2" 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 
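The "Cannot find device" messages here are expected: before building the veth topology for the abort test, the common setup first tears down anything left over from the previous run, and the trace shows each failed teardown being followed by true, i.e. the failures are deliberately ignored. A simplified sketch of that idempotent-cleanup pattern (function name hypothetical):

    cleanup_leftover_veths() {
        ip link set nvmf_init_br nomaster || true
        ip link set nvmf_tgt_br  nomaster || true
        ip link set nvmf_tgt_br2 nomaster || true
        ip link set nvmf_init_br down     || true
        ip link set nvmf_tgt_br  down     || true
        ip link set nvmf_tgt_br2 down     || true
        ip link delete nvmf_br type bridge || true
        ip link delete nvmf_init_if        || true
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    }
    cleanup_leftover_veths   # harmless when nothing exists yet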
00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:06.666 Cannot find device "nvmf_tgt_br" 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:06.666 Cannot find device "nvmf_tgt_br2" 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:06.666 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br master nvmf_br 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:06.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:11:06.924 00:11:06.924 --- 10.0.0.2 ping statistics --- 00:11:06.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.924 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:06.924 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:06.924 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:11:06.924 00:11:06.924 --- 10.0.0.3 ping statistics --- 00:11:06.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.924 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:06.924 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:06.924 00:11:06.924 --- 10.0.0.1 ping statistics --- 00:11:06.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.924 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68530 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68530 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68530 ']' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:06.925 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.925 18:36:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.183 [2024-07-15 18:36:41.446667] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:07.183 [2024-07-15 18:36:41.446772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.183 [2024-07-15 18:36:41.584800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.440 [2024-07-15 18:36:41.700666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.440 [2024-07-15 18:36:41.700723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.440 [2024-07-15 18:36:41.700738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.440 [2024-07-15 18:36:41.700751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.440 [2024-07-15 18:36:41.700762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.440 [2024-07-15 18:36:41.701034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.440 [2024-07-15 18:36:41.702210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.440 [2024-07-15 18:36:41.702220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.039 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.039 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:08.039 18:36:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.039 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.039 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.297 [2024-07-15 18:36:42.543111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.297 Malloc0 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.297 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd 
bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 Delay0 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 [2024-07-15 18:36:42.614690] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.298 18:36:42 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:08.555 [2024-07-15 18:36:42.795038] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:10.455 Initializing NVMe Controllers 00:11:10.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:10.455 controller IO queue size 128 less than required 00:11:10.455 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:10.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:10.455 Initialization complete. Launching workers. 
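The abort run's completion counters follow immediately below. For reference, the rpc_cmd calls traced above (target/abort.sh steps 17 through 30 in the trace) boil down to the sequence sketched here; this assumes rpc_cmd simply forwards to scripts/rpc.py over the default /var/tmp/spdk.sock, paths and arguments are copied from the trace, and everything else (the rpc variable name, comments) is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport with the options used by the test
  $rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MB malloc bdev, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000              # large artificial latencies keep I/O in flight long enough to abort
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128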
00:11:10.455 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32476 00:11:10.455 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32537, failed to submit 62 00:11:10.455 success 32480, unsuccess 57, failed 0 00:11:10.455 18:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:10.455 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.455 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.455 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.456 rmmod nvme_tcp 00:11:10.456 rmmod nvme_fabrics 00:11:10.456 rmmod nvme_keyring 00:11:10.456 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68530 ']' 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68530 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68530 ']' 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68530 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68530 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:10.713 killing process with pid 68530 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68530' 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68530 00:11:10.713 18:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68530 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.713 18:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.970 18:36:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:10.971 00:11:10.971 real 0m4.347s 00:11:10.971 user 0m12.234s 00:11:10.971 sys 0m1.223s 00:11:10.971 18:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.971 18:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 ************************************ 00:11:10.971 END TEST nvmf_abort 00:11:10.971 ************************************ 00:11:10.971 18:36:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.971 18:36:45 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:10.971 18:36:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.971 18:36:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.971 18:36:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 ************************************ 00:11:10.971 START TEST nvmf_ns_hotplug_stress 00:11:10.971 ************************************ 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:10.971 * Looking for test storage... 00:11:10.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.971 18:36:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.971 18:36:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.971 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.972 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.972 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.972 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:10.972 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:11.228 Cannot find device "nvmf_tgt_br" 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.228 Cannot find device "nvmf_tgt_br2" 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:11.228 Cannot find device "nvmf_tgt_br" 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:11.228 Cannot find device "nvmf_tgt_br2" 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:11.228 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:11.228 18:36:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:11.229 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:11.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:11:11.485 00:11:11.485 --- 10.0.0.2 ping statistics --- 00:11:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.485 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:11.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:11:11.485 00:11:11.485 --- 10.0.0.3 ping statistics --- 00:11:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.485 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:11:11.485 00:11:11.485 --- 10.0.0.1 ping statistics --- 00:11:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.485 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68805 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68805 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68805 ']' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.485 18:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.485 [2024-07-15 18:36:45.893016] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:11.485 [2024-07-15 18:36:45.893184] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.741 [2024-07-15 18:36:46.039376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.741 [2024-07-15 18:36:46.157605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:11.741 [2024-07-15 18:36:46.157658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.741 [2024-07-15 18:36:46.157673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.741 [2024-07-15 18:36:46.157686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.741 [2024-07-15 18:36:46.157697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.741 [2024-07-15 18:36:46.157918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.741 [2024-07-15 18:36:46.158918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.741 [2024-07-15 18:36:46.158928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:12.670 18:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.927 [2024-07-15 18:36:47.243118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.927 18:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:13.185 18:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.442 [2024-07-15 18:36:47.767788] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.442 18:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.700 18:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:13.957 Malloc0 00:11:13.957 18:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:13.957 Delay0 00:11:13.957 18:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.214 18:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:14.473 NULL1 00:11:14.473 
18:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:14.732 18:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:14.732 18:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68937 00:11:14.732 18:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:14.732 18:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.106 Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 18:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.364 18:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:16.364 18:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:16.622 true 00:11:16.622 18:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:16.622 18:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.187 18:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.445 18:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:17.445 18:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:18.023 true 00:11:18.023 18:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:18.023 18:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.309 18:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.568 18:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:18.568 18:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:18.568 true 00:11:18.826 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:18.826 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.826 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.085 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:19.085 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:19.342 true 00:11:19.342 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:19.342 18:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.275 18:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.533 18:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:20.533 18:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:20.790 true 00:11:20.790 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:20.790 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.049 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.307 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:21.307 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:21.307 true 00:11:21.565 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:21.565 18:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.500 18:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.500 18:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:22.500 18:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:22.758 true 00:11:22.758 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:22.758 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.017 18:36:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.276 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:23.276 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:23.534 true 00:11:23.534 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:23.534 18:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.792 18:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.051 18:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:24.051 18:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:24.309 true 00:11:24.309 18:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:24.309 18:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.311 18:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.597 18:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:25.597 18:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:25.855 true 00:11:25.855 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:25.855 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.114 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.114 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:26.114 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:26.372 true 00:11:26.372 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:26.372 18:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:27.306 18:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.564 18:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:27.564 18:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:27.822 true 00:11:27.822 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:27.822 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.080 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.337 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:28.337 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:28.595 true 00:11:28.595 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:28.595 18:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.853 18:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.135 18:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:29.135 18:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:29.405 true 00:11:29.405 18:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:29.405 18:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.341 18:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.599 18:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:30.599 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:30.857 true 00:11:30.857 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:30.857 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.117 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.426 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:31.426 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:31.426 true 00:11:31.426 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:31.426 18:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
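At this point the shape of the stress loop is visible in the trace: while the spdk_nvme_perf job (PID 68937 here) is still alive, the script keeps detaching namespace 1, re-attaching the Delay0 bdev, and resizing NULL1 to a new size. A minimal sketch of that loop, reconstructed from the traced rpc.py calls (ns_hotplug_stress.sh steps 44 through 50 per the trace); PERF_PID mirrors the variable set at step 42 above, while rpc and the loop form are illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                          # loop until the perf workload exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1 under active I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add the Delay0 bdev back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                        # grow NULL1 while it is exported
  done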
00:11:32.367 18:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.624 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:32.624 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:32.880 true 00:11:32.880 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:32.880 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.447 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.447 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:33.447 18:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:33.704 true 00:11:33.704 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:33.704 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.962 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.220 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:34.220 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:34.478 true 00:11:34.478 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:34.478 18:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.413 18:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.671 18:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:35.671 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:35.929 true 00:11:35.929 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:35.929 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.188 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.188 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:36.188 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:36.446 true 00:11:36.446 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:36.446 18:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.381 18:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.640 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:37.640 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:37.899 true 00:11:37.899 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:37.899 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.156 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.413 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:38.413 18:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:38.698 true 00:11:38.698 18:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:38.698 18:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.628 18:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.628 18:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:39.628 18:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:39.885 true 00:11:39.885 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:39.885 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.143 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.143 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:40.143 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:40.401 true 00:11:40.401 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:40.401 18:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.338 18:37:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.596 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:41.596 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:41.854 true 00:11:41.854 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:41.854 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.113 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.371 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:42.371 18:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:42.628 true 00:11:42.629 18:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:42.629 18:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.563 18:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.821 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:43.821 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:44.079 true 00:11:44.079 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:44.079 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.337 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.596 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:44.596 18:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:44.855 true 00:11:44.855 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:44.855 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.855 Initializing NVMe Controllers 00:11:44.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:44.855 Controller IO queue size 128, less than required. 00:11:44.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:44.855 Controller IO queue size 128, less than required. 
00:11:44.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:44.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:44.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:44.855 Initialization complete. Launching workers. 00:11:44.855 ======================================================== 00:11:44.855 Latency(us) 00:11:44.855 Device Information : IOPS MiB/s Average min max 00:11:44.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 390.07 0.19 154856.93 3103.00 1038780.75 00:11:44.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10257.35 5.01 12479.06 3430.87 530921.86 00:11:44.855 ======================================================== 00:11:44.855 Total : 10647.43 5.20 17695.15 3103.00 1038780.75 00:11:44.855 00:11:45.113 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.113 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:45.113 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:45.371 true 00:11:45.629 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68937 00:11:45.629 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68937) - No such process 00:11:45.629 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68937 00:11:45.629 18:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.629 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:46.196 null0 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.196 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:46.454 null1 00:11:46.454 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.454 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.454 18:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:46.712 null2 
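For reference, the single-namespace phase that just finished (the xtrace markers ns_hotplug_stress.sh@44 through @50 above) reduces to roughly the loop sketched below. This is reconstructed from the trace, not copied from the script source; the rpc shorthand variable, perf_pid, and the while-loop framing are assumptions, while the RPC names, subsystem NQN, bdev name NULL1 and the one-step size increments are taken directly from the trace.

    # Sketch reconstructed from the xtrace above (perf_pid was 68937 in this run;
    # rpc and perf_pid are assumed names, not the script's own variables).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while kill -0 "$perf_pid"; do                                        # @44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
        null_size=$((null_size + 1))                                     # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                       # @50
    done

kill -0 only tests that the I/O process still exists, so the loop keeps hot-removing/re-adding namespace 1 and growing NULL1 (1017, 1018, ... 1030 above) until the perf job exits, which is when the "kill: (68937) - No such process" message appears and the bdevperf latency summary is printed.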
00:11:46.712 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.712 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.712 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:46.968 null3 00:11:46.968 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.968 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.968 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:47.226 null4 00:11:47.226 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:47.226 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:47.226 18:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:47.793 null5 00:11:47.793 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:47.793 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:47.793 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:47.793 null6 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:48.051 null7 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:48.051 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69998 69999 70001 70003 70004 70008 70010 70011 00:11:48.310 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.593 18:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:48.593 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.593 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.593 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.850 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.851 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:48.851 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:49.108 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:49.367 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:49.625 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.625 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.625 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:49.625 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:49.626 18:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:49.626 18:37:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.626 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.884 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.143 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.402 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.661 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.661 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.661 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.661 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.661 18:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.661 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.661 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.661 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.661 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.661 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.920 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.179 18:37:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.179 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.180 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.439 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.697 18:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.697 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.697 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.697 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.697 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.697 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.957 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.958 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.216 18:37:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.216 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:52.475 18:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.733 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:52.991 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.250 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
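The interleaved output through this part of the log comes from eight backgrounded add_remove workers (PIDs 69998-70011 in the wait above), one per null bdev. Pieced together from the xtrace markers @14-@18 and @58-@66, the pattern is roughly the sketch below; the rpc shorthand, the for-loop spelling and the two-loop layout are assumptions, while the function name, arguments, iteration count of 10 and the RPC calls mirror the trace.

    # Sketch reconstructed from the xtrace (markers @14-@18 and @58-@66);
    # not the verbatim ns_hotplug_stress.sh source.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {                                                           # @14
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                       # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    nthreads=8
    pids=()                                                                  # @58
    for ((i = 0; i < nthreads; i++)); do                                     # @59
        "$rpc" bdev_null_create "null$i" 100 4096                            # @60, as traced
    done
    for ((i = 0; i < nthreads; i++)); do                                     # @62
        add_remove "$((i + 1))" "null$i" &                                   # @63
        pids+=($!)                                                           # @64
    done
    wait "${pids[@]}"                                                        # @66

Because each worker targets its own namespace ID (1-8) against the same subsystem, the add/remove RPCs race against one another, which is exactly the namespace hotplug stress the interleaved trace entries above are exercising.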
00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.508 18:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.766 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:54.025 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.283 rmmod nvme_tcp 00:11:54.283 rmmod nvme_fabrics 00:11:54.283 rmmod nvme_keyring 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68805 ']' 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68805 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68805 ']' 00:11:54.283 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68805 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68805 00:11:54.284 killing process with pid 68805 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68805' 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68805 00:11:54.284 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68805 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:11:54.542 00:11:54.542 real 0m43.666s 00:11:54.542 user 3m26.822s 00:11:54.542 sys 0m16.558s 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.542 18:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.542 ************************************ 00:11:54.542 END TEST nvmf_ns_hotplug_stress 00:11:54.542 ************************************ 00:11:54.542 18:37:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:54.542 18:37:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:54.542 18:37:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:54.542 18:37:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.542 18:37:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.802 ************************************ 00:11:54.802 START TEST nvmf_connect_stress 00:11:54.802 ************************************ 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:54.802 * Looking for test storage... 00:11:54.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.802 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:54.803 Cannot find device "nvmf_tgt_br" 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.803 Cannot find device "nvmf_tgt_br2" 00:11:54.803 18:37:29 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:54.803 Cannot find device "nvmf_tgt_br" 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:54.803 Cannot find device "nvmf_tgt_br2" 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:11:54.803 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.061 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.062 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:11:55.321 00:11:55.321 --- 10.0.0.2 ping statistics --- 00:11:55.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.321 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:55.321 00:11:55.321 --- 10.0.0.3 ping statistics --- 00:11:55.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.321 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:55.321 00:11:55.321 --- 10.0.0.1 ping statistics --- 00:11:55.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.321 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71342 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71342 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71342 ']' 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.321 18:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.321 [2024-07-15 18:37:29.651166] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:55.321 [2024-07-15 18:37:29.651290] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.321 [2024-07-15 18:37:29.795235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.580 [2024-07-15 18:37:29.968905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:55.580 [2024-07-15 18:37:29.968991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.580 [2024-07-15 18:37:29.969007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.580 [2024-07-15 18:37:29.969021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.580 [2024-07-15 18:37:29.969032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.580 [2024-07-15 18:37:29.969144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.580 [2024-07-15 18:37:29.970306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.580 [2024-07-15 18:37:29.970321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.516 [2024-07-15 18:37:30.696466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.516 [2024-07-15 18:37:30.716648] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.516 NULL1 00:11:56.516 18:37:30 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71400 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.516 18:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.775 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.775 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:56.775 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.775 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.775 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.034 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.034 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:57.034 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.034 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.034 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.600 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.600 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:57.600 18:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.600 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.600 18:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.858 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.858 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:57.858 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.858 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:57.858 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.115 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.115 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:58.115 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.115 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.115 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.372 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.372 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:58.372 18:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.372 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.372 18:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.630 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.630 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:58.630 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.630 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.630 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.194 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.194 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:59.194 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.194 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.194 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.452 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.452 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:59.452 18:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.452 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.452 18:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.709 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.709 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:59.709 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.709 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.709 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.966 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.966 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:11:59.966 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.966 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.966 18:37:34 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.268 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.268 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:00.268 18:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.268 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.268 18:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.833 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.833 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:00.833 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.833 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.833 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.091 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.091 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:01.091 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.091 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.091 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.348 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.348 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:01.348 18:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.348 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.348 18:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.606 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.606 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:01.606 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.606 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.606 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.862 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.862 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:01.862 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.862 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.862 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.426 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.426 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:02.426 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.426 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.426 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.684 18:37:36 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.684 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:02.684 18:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.684 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.684 18:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.941 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.941 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:02.941 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.941 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.941 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.198 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.198 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:03.198 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.198 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.198 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.456 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.456 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:03.456 18:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.456 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.456 18:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.020 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.020 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:04.020 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.020 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.020 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.278 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.278 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:04.278 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.278 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.278 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.537 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.537 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:04.537 18:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.537 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.537 18:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.796 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:12:04.796 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:04.796 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.796 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.796 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.361 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.361 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:05.361 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.361 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.361 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.617 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.617 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:05.617 18:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.617 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.617 18:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.876 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.876 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:05.876 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.876 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.876 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.134 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.134 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:06.134 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.134 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.134 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.392 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.392 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:06.392 18:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.392 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.392 18:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.652 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71400 00:12:06.910 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71400) - No such process 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71400 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 
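The repeated "kill -0 71400" probes above are the liveness poll around the connect_stress run: kill -0 delivers no signal and only checks that the PID still exists, so the harness keeps replaying the batched RPCs from rpc.txt for as long as the stress tool (launched with a bounded run, -t 10) stays alive; the "No such process" message marks the iteration where the tool has exited, after which the script waits for it and removes rpc.txt. A hedged sketch of that control flow, assuming rpc.txt was populated earlier with one RPC method per line (its contents are not shown in the trace), and not the verbatim target/connect_stress.sh:

  # Liveness-poll sketch around the connect_stress tool.
  BIN=/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt

  "$BIN" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!

  while kill -0 "$PERF_PID"; do          # existence probe only; no signal is sent
      while read -r cmd; do
          # assumed format: each line of rpc.txt is "<method> <args...>";
          # word-splitting on $cmd is intended
          "$RPC" $cmd
      done < "$rpcs"
  done

  wait "$PERF_PID" || true               # reap the exit status once it is gone
  rm -f "$rpcs"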
00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.910 rmmod nvme_tcp 00:12:06.910 rmmod nvme_fabrics 00:12:06.910 rmmod nvme_keyring 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71342 ']' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71342 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71342 ']' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71342 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71342 00:12:06.910 killing process with pid 71342 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71342' 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71342 00:12:06.910 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71342 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:07.167 00:12:07.167 real 0m12.539s 00:12:07.167 user 0m40.022s 00:12:07.167 sys 0m4.565s 00:12:07.167 18:37:41 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.167 18:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.167 ************************************ 00:12:07.167 END TEST nvmf_connect_stress 00:12:07.167 ************************************ 00:12:07.167 18:37:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.167 18:37:41 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:07.167 18:37:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.167 18:37:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.167 18:37:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.167 ************************************ 00:12:07.167 START TEST nvmf_fused_ordering 00:12:07.167 ************************************ 00:12:07.167 18:37:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:07.425 * Looking for test storage... 00:12:07.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 
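Stripped of the PATH bookkeeping, sourcing nvmf/common.sh for this test boils down to a small set of variables plus a host identity generated on the fly. A rough summary of the values logged above (treating the host ID as the UUID suffix of the generated host NQN is an observation from this trace, not a documented contract; nvme-cli provides gen-hostnqn):

# Approximate environment established by nvmf/common.sh for this run.
NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=virt                                   # veth/netns topology instead of real NICs
NVME_HOSTNQN=$(nvme gen-hostnqn)                # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}             # UUID suffix, mirroring the values logged above
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP_SHM_ID=0                               # 0 in this run
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # args appended by build_nvmf_app_args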
00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.425 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:07.426 Cannot find device "nvmf_tgt_br" 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.426 Cannot find device "nvmf_tgt_br2" 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set 
nvmf_init_br down 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:07.426 Cannot find device "nvmf_tgt_br" 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:07.426 Cannot find device "nvmf_tgt_br2" 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:07.426 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:07.683 18:37:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
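With NET_TYPE=virt the harness builds the whole fabric out of veth pairs: the target-side interfaces live in a private namespace, the host-side peers hang off one bridge, and 10.0.0.1/2/3 give the initiator and two target addresses. The sketch below condenses the bring-up traced here, together with the bridge enslaving, firewall openings, ping checks, and in-namespace nvmf_tgt launch that follow immediately below in the trace; it is a simplified approximation of nvmf_veth_init, not the helper itself. Requires root and iproute2; the nvmf_tgt path is the one used by this job.

# Condensed sketch of the veth/netns topology assembled above.
ip netns add nvmf_tgt_ns_spdk

# Host-side peers (initiator plus two bridge legs) and target-side interfaces.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # target address 2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

# Bridge the host-side legs together and open the NVMe/TCP port.
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

# The target itself then runs inside the namespace, as in the trace below.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &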
00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:07.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:07.683 00:12:07.683 --- 10.0.0.2 ping statistics --- 00:12:07.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.683 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:07.683 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:07.683 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:07.683 00:12:07.683 --- 10.0.0.3 ping statistics --- 00:12:07.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.683 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:07.683 00:12:07.683 --- 10.0.0.1 ping statistics --- 00:12:07.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.683 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71728 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:07.683 18:37:42 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71728 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71728 ']' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.683 18:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:07.940 [2024-07-15 18:37:42.206156] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:07.940 [2024-07-15 18:37:42.206259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.940 [2024-07-15 18:37:42.352325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.199 [2024-07-15 18:37:42.468589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.199 [2024-07-15 18:37:42.468654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.199 [2024-07-15 18:37:42.468670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.199 [2024-07-15 18:37:42.468683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.199 [2024-07-15 18:37:42.468693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:08.199 [2024-07-15 18:37:42.468734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.764 [2024-07-15 18:37:43.227618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.764 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.764 [2024-07-15 18:37:43.243742] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 NULL1 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.021 18:37:43 
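The rpc_cmd calls just traced configure the freshly started target over its JSON-RPC socket: create the TCP transport, a subsystem with a listener on 10.0.0.2:4420, and a 1000 MB null bdev exposed as its namespace. Outside the harness the same sequence can be driven with scripts/rpc.py; a rough equivalent, with the flags copied verbatim from the trace and a socket poll standing in for waitforlisten:

# Hedged equivalent of the rpc_cmd sequence above, using SPDK's scripts/rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Wait until the default RPC socket answers (rough stand-in for waitforlisten).
until [ -S /var/tmp/spdk.sock ] && "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$RPC" nvmf_create_transport -t tcp -o -u 8192                   # flags as used by this test
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512                           # name, size in MB, block size
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering example binary that runs next attaches to this subsystem over 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'; the enumerated fused_ordering(N) lines that follow are its per-command progress output.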
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.021 18:37:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:09.021 [2024-07-15 18:37:43.298084] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:09.021 [2024-07-15 18:37:43.298141] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71778 ] 00:12:09.278 Attached to nqn.2016-06.io.spdk:cnode1 00:12:09.278 Namespace ID: 1 size: 1GB 00:12:09.278 fused_ordering(0) 00:12:09.278 fused_ordering(1) 00:12:09.278 fused_ordering(2) 00:12:09.278 fused_ordering(3) 00:12:09.278 fused_ordering(4) 00:12:09.278 fused_ordering(5) 00:12:09.278 fused_ordering(6) 00:12:09.278 fused_ordering(7) 00:12:09.278 fused_ordering(8) 00:12:09.278 fused_ordering(9) 00:12:09.278 fused_ordering(10) 00:12:09.278 fused_ordering(11) 00:12:09.278 fused_ordering(12) 00:12:09.278 fused_ordering(13) 00:12:09.278 fused_ordering(14) 00:12:09.278 fused_ordering(15) 00:12:09.278 fused_ordering(16) 00:12:09.278 fused_ordering(17) 00:12:09.278 fused_ordering(18) 00:12:09.278 fused_ordering(19) 00:12:09.278 fused_ordering(20) 00:12:09.278 fused_ordering(21) 00:12:09.278 fused_ordering(22) 00:12:09.278 fused_ordering(23) 00:12:09.278 fused_ordering(24) 00:12:09.278 fused_ordering(25) 00:12:09.278 fused_ordering(26) 00:12:09.278 fused_ordering(27) 00:12:09.278 fused_ordering(28) 00:12:09.279 fused_ordering(29) 00:12:09.279 fused_ordering(30) 00:12:09.279 fused_ordering(31) 00:12:09.279 fused_ordering(32) 00:12:09.279 fused_ordering(33) 00:12:09.279 fused_ordering(34) 00:12:09.279 fused_ordering(35) 00:12:09.279 fused_ordering(36) 00:12:09.279 fused_ordering(37) 00:12:09.279 fused_ordering(38) 00:12:09.279 fused_ordering(39) 00:12:09.279 fused_ordering(40) 00:12:09.279 fused_ordering(41) 00:12:09.279 fused_ordering(42) 00:12:09.279 fused_ordering(43) 00:12:09.279 fused_ordering(44) 00:12:09.279 fused_ordering(45) 00:12:09.279 fused_ordering(46) 00:12:09.279 fused_ordering(47) 00:12:09.279 fused_ordering(48) 00:12:09.279 fused_ordering(49) 00:12:09.279 fused_ordering(50) 00:12:09.279 fused_ordering(51) 00:12:09.279 fused_ordering(52) 00:12:09.279 fused_ordering(53) 00:12:09.279 fused_ordering(54) 00:12:09.279 fused_ordering(55) 00:12:09.279 fused_ordering(56) 00:12:09.279 fused_ordering(57) 00:12:09.279 fused_ordering(58) 00:12:09.279 fused_ordering(59) 00:12:09.279 fused_ordering(60) 00:12:09.279 fused_ordering(61) 00:12:09.279 fused_ordering(62) 00:12:09.279 fused_ordering(63) 00:12:09.279 fused_ordering(64) 00:12:09.279 fused_ordering(65) 00:12:09.279 fused_ordering(66) 00:12:09.279 fused_ordering(67) 00:12:09.279 fused_ordering(68) 00:12:09.279 fused_ordering(69) 00:12:09.279 fused_ordering(70) 00:12:09.279 fused_ordering(71) 00:12:09.279 fused_ordering(72) 00:12:09.279 fused_ordering(73) 00:12:09.279 fused_ordering(74) 00:12:09.279 fused_ordering(75) 00:12:09.279 fused_ordering(76) 00:12:09.279 fused_ordering(77) 00:12:09.279 fused_ordering(78) 00:12:09.279 fused_ordering(79) 00:12:09.279 fused_ordering(80) 00:12:09.279 
fused_ordering(81) 00:12:09.279 [repetitive enumeration condensed: fused_ordering(82) through fused_ordering(941) follow in strict ascending order with no other output interleaved; the elapsed timestamps step from 00:12:09.279 through 00:12:09.562, 00:12:10.128, 00:12:10.387 and 00:12:10.951 along the way] fused_ordering(941)
00:12:10.951 fused_ordering(942) 00:12:10.951 fused_ordering(943) 00:12:10.951 fused_ordering(944) 00:12:10.951 fused_ordering(945) 00:12:10.951 fused_ordering(946) 00:12:10.951 fused_ordering(947) 00:12:10.951 fused_ordering(948) 00:12:10.951 fused_ordering(949) 00:12:10.951 fused_ordering(950) 00:12:10.951 fused_ordering(951) 00:12:10.952 fused_ordering(952) 00:12:10.952 fused_ordering(953) 00:12:10.952 fused_ordering(954) 00:12:10.952 fused_ordering(955) 00:12:10.952 fused_ordering(956) 00:12:10.952 fused_ordering(957) 00:12:10.952 fused_ordering(958) 00:12:10.952 fused_ordering(959) 00:12:10.952 fused_ordering(960) 00:12:10.952 fused_ordering(961) 00:12:10.952 fused_ordering(962) 00:12:10.952 fused_ordering(963) 00:12:10.952 fused_ordering(964) 00:12:10.952 fused_ordering(965) 00:12:10.952 fused_ordering(966) 00:12:10.952 fused_ordering(967) 00:12:10.952 fused_ordering(968) 00:12:10.952 fused_ordering(969) 00:12:10.952 fused_ordering(970) 00:12:10.952 fused_ordering(971) 00:12:10.952 fused_ordering(972) 00:12:10.952 fused_ordering(973) 00:12:10.952 fused_ordering(974) 00:12:10.952 fused_ordering(975) 00:12:10.952 fused_ordering(976) 00:12:10.952 fused_ordering(977) 00:12:10.952 fused_ordering(978) 00:12:10.952 fused_ordering(979) 00:12:10.952 fused_ordering(980) 00:12:10.952 fused_ordering(981) 00:12:10.952 fused_ordering(982) 00:12:10.952 fused_ordering(983) 00:12:10.952 fused_ordering(984) 00:12:10.952 fused_ordering(985) 00:12:10.952 fused_ordering(986) 00:12:10.952 fused_ordering(987) 00:12:10.952 fused_ordering(988) 00:12:10.952 fused_ordering(989) 00:12:10.952 fused_ordering(990) 00:12:10.952 fused_ordering(991) 00:12:10.952 fused_ordering(992) 00:12:10.952 fused_ordering(993) 00:12:10.952 fused_ordering(994) 00:12:10.952 fused_ordering(995) 00:12:10.952 fused_ordering(996) 00:12:10.952 fused_ordering(997) 00:12:10.952 fused_ordering(998) 00:12:10.952 fused_ordering(999) 00:12:10.952 fused_ordering(1000) 00:12:10.952 fused_ordering(1001) 00:12:10.952 fused_ordering(1002) 00:12:10.952 fused_ordering(1003) 00:12:10.952 fused_ordering(1004) 00:12:10.952 fused_ordering(1005) 00:12:10.952 fused_ordering(1006) 00:12:10.952 fused_ordering(1007) 00:12:10.952 fused_ordering(1008) 00:12:10.952 fused_ordering(1009) 00:12:10.952 fused_ordering(1010) 00:12:10.952 fused_ordering(1011) 00:12:10.952 fused_ordering(1012) 00:12:10.952 fused_ordering(1013) 00:12:10.952 fused_ordering(1014) 00:12:10.952 fused_ordering(1015) 00:12:10.952 fused_ordering(1016) 00:12:10.952 fused_ordering(1017) 00:12:10.952 fused_ordering(1018) 00:12:10.952 fused_ordering(1019) 00:12:10.952 fused_ordering(1020) 00:12:10.952 fused_ordering(1021) 00:12:10.952 fused_ordering(1022) 00:12:10.952 fused_ordering(1023) 00:12:10.952 18:37:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:10.952 18:37:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:10.952 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.952 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.209 rmmod nvme_tcp 00:12:11.209 rmmod 
nvme_fabrics 00:12:11.209 rmmod nvme_keyring 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71728 ']' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71728 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71728 ']' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71728 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71728 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:11.209 killing process with pid 71728 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71728' 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71728 00:12:11.209 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71728 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:11.466 00:12:11.466 real 0m4.164s 00:12:11.466 user 0m4.792s 00:12:11.466 sys 0m1.577s 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.466 ************************************ 00:12:11.466 END TEST nvmf_fused_ordering 00:12:11.466 ************************************ 00:12:11.466 18:37:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:11.466 18:37:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.466 18:37:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:11.466 18:37:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.466 18:37:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.466 18:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.466 
************************************ 00:12:11.466 START TEST nvmf_delete_subsystem 00:12:11.466 ************************************ 00:12:11.466 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:11.466 * Looking for test storage... 00:12:11.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.723 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.724 18:37:45 
nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.724 18:37:45 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:11.724 18:37:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:11.724 Cannot find device "nvmf_tgt_br" 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.724 Cannot find device "nvmf_tgt_br2" 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:11.724 Cannot find device "nvmf_tgt_br" 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:11.724 Cannot find device "nvmf_tgt_br2" 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.724 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:11.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:11.983 00:12:11.983 --- 10.0.0.2 ping statistics --- 00:12:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.983 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:11.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:11.983 00:12:11.983 --- 10.0.0.3 ping statistics --- 00:12:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.983 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:11.983 00:12:11.983 --- 10.0.0.1 ping statistics --- 00:12:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.983 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71986 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71986 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71986 ']' 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:12:11.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.983 18:37:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.983 [2024-07-15 18:37:46.463328] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:11.983 [2024-07-15 18:37:46.463455] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.240 [2024-07-15 18:37:46.615327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.498 [2024-07-15 18:37:46.791679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.498 [2024-07-15 18:37:46.791771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.498 [2024-07-15 18:37:46.791789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.498 [2024-07-15 18:37:46.791803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.498 [2024-07-15 18:37:46.791815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.498 [2024-07-15 18:37:46.791994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.498 [2024-07-15 18:37:46.791999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.063 [2024-07-15 18:37:47.506783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.063 [2024-07-15 18:37:47.524271] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.063 NULL1 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.063 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.322 Delay0 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=72037 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:13.322 18:37:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:13.322 [2024-07-15 18:37:47.737900] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
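The rpc_cmd calls above assemble the target that the delete test then tears down under load: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (~1 s of injected latency in each direction, values in microseconds), and the namespace exposed to spdk_nvme_perf. A hedged, condensed sketch of the same sequence driven through rpc.py directly (paths, names and addresses are the test defaults taken from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # keep 128 commands in flight while the subsystem is deleted underneath them
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!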
00:12:15.221 18:37:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.221 18:37:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.221 18:37:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 Write completed with error (sct=0, sc=8) 00:12:15.479 Read completed with error (sct=0, sc=8) 00:12:15.479 starting I/O failed: -6 00:12:15.479 [2024-07-15 18:37:49.780932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc04a80 is same with the state(5) to be set 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 
00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 
00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 starting I/O failed: -6 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 [2024-07-15 18:37:49.782774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7068000c00 is same with the state(5) to be set 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 
00:12:15.480 Write completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:15.480 Read completed with error (sct=0, sc=8) 00:12:16.415 [2024-07-15 18:37:50.751636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1510 is same with the state(5) to be set 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 [2024-07-15 18:37:50.782861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706800d740 is same with the state(5) to be set 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 [2024-07-15 18:37:50.783207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbe16f0 is same with the state(5) to be set 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 [2024-07-15 18:37:50.783860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc034c0 is same with the state(5) to be set 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Write completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 Read completed with error (sct=0, sc=8) 00:12:16.415 [2024-07-15 18:37:50.784550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706800cfe0 is same with the state(5) to be set 00:12:16.415 Initializing NVMe Controllers 00:12:16.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:16.416 Controller IO queue size 128, less than required. 00:12:16.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:16.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:16.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:16.416 Initialization complete. Launching workers. 00:12:16.416 ======================================================== 00:12:16.416 Latency(us) 00:12:16.416 Device Information : IOPS MiB/s Average min max 00:12:16.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.33 0.08 916206.10 503.24 1019774.57 00:12:16.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.90 0.08 1008823.40 386.17 2003918.03 00:12:16.416 ======================================================== 00:12:16.416 Total : 316.23 0.15 961572.63 386.17 2003918.03 00:12:16.416 00:12:16.416 [2024-07-15 18:37:50.785733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1510 (9): Bad file descriptor 00:12:16.416 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:16.416 18:37:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.416 18:37:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:16.416 18:37:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 72037 00:12:16.416 18:37:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 72037 00:12:16.981 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (72037) - No such process 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 72037 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 72037 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 72037 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
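The error spam above is the point of the first pass: nvmf_delete_subsystem is issued while spdk_nvme_perf still has queue depth 128 outstanding behind the 1-second delay bdev, so the queued commands complete with sct=0/sc=8 (command aborted because its queue pair went away) and perf exits with errors. The script then only has to prove that the perf process is gone and that waiting on it reports failure; a sketch of that check, with variable names assumed from delete_subsystem.sh:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # poll the perf pid until it disappears
      (( delay++ > 30 )) && exit 1            # give up if perf somehow survives the delete
      sleep 0.5
  done
  NOT wait "$perf_pid"   # autotest_common.sh helper: succeeds only because wait returns non-zero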
-- # [[ 0 == 0 ]] 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.981 [2024-07-15 18:37:51.315868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=72083 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:16.981 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:17.239 [2024-07-15 18:37:51.517389] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
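The second pass mirrors the first but without the delete: the subsystem, listener and Delay0 namespace are recreated and a shorter perf run (-t 3) is allowed to finish while the script polls for its exit, so this time wait must succeed. A minimal sketch of that loop, under the same assumed variable names:

  perf_pid=$!                                 # pid of the backgrounded spdk_nvme_perf -t 3 run
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1            # fail if perf outlives the ~10 s polling budget
      sleep 0.5
  done
  wait "$perf_pid"                            # must return 0: the run completed cleanly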
00:12:17.497 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:17.497 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:17.497 18:37:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:18.062 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.062 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:18.062 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:18.650 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.650 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:18.650 18:37:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:18.907 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:18.907 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:18.907 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:19.472 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:19.472 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:19.472 18:37:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.039 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.039 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:20.039 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.297 Initializing NVMe Controllers 00:12:20.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:20.297 Controller IO queue size 128, less than required. 00:12:20.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:20.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:20.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:20.297 Initialization complete. Launching workers. 
00:12:20.297 ======================================================== 00:12:20.297 Latency(us) 00:12:20.297 Device Information : IOPS MiB/s Average min max 00:12:20.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002776.55 1000153.65 1006896.40 00:12:20.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004414.20 1000169.51 1041419.21 00:12:20.297 ======================================================== 00:12:20.297 Total : 256.00 0.12 1003595.38 1000153.65 1041419.21 00:12:20.297 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72083 00:12:20.555 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (72083) - No such process 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 72083 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:20.555 rmmod nvme_tcp 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.555 rmmod nvme_fabrics 00:12:20.555 rmmod nvme_keyring 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71986 ']' 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71986 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71986 ']' 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71986 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.555 18:37:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71986 00:12:20.555 killing process with pid 71986 00:12:20.555 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.555 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.555 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71986' 00:12:20.555 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71986 00:12:20.555 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71986 00:12:21.117 18:37:55 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.117 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.117 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.117 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.117 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:21.118 00:12:21.118 real 0m9.555s 00:12:21.118 user 0m28.510s 00:12:21.118 sys 0m2.072s 00:12:21.118 ************************************ 00:12:21.118 END TEST nvmf_delete_subsystem 00:12:21.118 ************************************ 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.118 18:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:21.118 18:37:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:21.118 18:37:55 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:21.118 18:37:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.118 18:37:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.118 18:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.118 ************************************ 00:12:21.118 START TEST nvmf_ns_masking 00:12:21.118 ************************************ 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:21.118 * Looking for test storage... 
00:12:21.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=abb76c0f-b279-4559-b358-bcf6d717c95e 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=12a77ffb-8049-48f9-adb7-cb48427b3a30 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:21.118 
18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3029bad9-a154-4fda-b328-6a39a124ec77 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.118 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:21.376 Cannot find device "nvmf_tgt_br" 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:12:21.376 18:37:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.376 Cannot find device "nvmf_tgt_br2" 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:21.376 Cannot find device "nvmf_tgt_br" 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:21.376 Cannot find device "nvmf_tgt_br2" 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:21.376 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.634 18:37:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:21.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:12:21.634 00:12:21.634 --- 10.0.0.2 ping statistics --- 00:12:21.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.634 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:21.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:12:21.634 00:12:21.634 --- 10.0.0.3 ping statistics --- 00:12:21.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.634 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:21.634 00:12:21.634 --- 10.0.0.1 ping statistics --- 00:12:21.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.634 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:21.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
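A condensed sketch of the nvmf_veth_init plumbing captured above, using the same interface and namespace names the test logs; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is omitted here for brevity.

# Sketch: veth/bridge topology built by nvmf_veth_init (condensed from the commands logged above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge the host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target reachability check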
00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72319 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72319 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72319 ']' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.634 18:37:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:21.634 [2024-07-15 18:37:56.051471] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:21.634 [2024-07-15 18:37:56.051787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.891 [2024-07-15 18:37:56.192450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.892 [2024-07-15 18:37:56.367200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.892 [2024-07-15 18:37:56.367509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.892 [2024-07-15 18:37:56.367666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.892 [2024-07-15 18:37:56.367753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.892 [2024-07-15 18:37:56.367764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
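A minimal sketch of the target bring-up happening here and of the configuration RPCs issued just below in the log: nvmf_tgt is launched inside the target namespace, the test polls the RPC socket, and the subsystem is then configured. Paths are shortened relative to the absolute ones logged, and the wait loop is a simplified stand-in for the fuller waitforlisten helper.

# Sketch: start the target in its namespace and configure it over /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # simplified waitforlisten

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420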
00:12:21.892 [2024-07-15 18:37:56.367808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.822 18:37:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:23.387 [2024-07-15 18:37:57.611912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.387 18:37:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:23.387 18:37:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:23.387 18:37:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.644 Malloc1 00:12:23.644 18:37:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:23.902 Malloc2 00:12:23.902 18:37:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.159 18:37:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:24.461 18:37:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.725 [2024-07-15 18:37:59.054041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3029bad9-a154-4fda-b328-6a39a124ec77 -a 10.0.0.2 -s 4420 -i 4 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:24.725 18:37:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.253 [ 0]:0x1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87cb1b04cead4378b79a86ef8a7eddf0 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87cb1b04cead4378b79a86ef8a7eddf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.253 [ 0]:0x1 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87cb1b04cead4378b79a86ef8a7eddf0 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87cb1b04cead4378b79a86ef8a7eddf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.253 [ 1]:0x2 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:27.253 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:27.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.511 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.511 18:38:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3029bad9-a154-4fda-b328-6a39a124ec77 -a 10.0.0.2 -s 4420 -i 4 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:28.097 18:38:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:29.999 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:30.258 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:30.258 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:30.258 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.259 [ 0]:0x2 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.259 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.516 [ 0]:0x1 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87cb1b04cead4378b79a86ef8a7eddf0 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87cb1b04cead4378b79a86ef8a7eddf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.516 [ 1]:0x2 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.516 18:38:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.775 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:30.775 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.775 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.033 [ 0]:0x2 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:31.033 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.291 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.549 18:38:05 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3029bad9-a154-4fda-b328-6a39a124ec77 -a 10.0.0.2 -s 4420 -i 4 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:31.549 18:38:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:34.075 18:38:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.075 [ 0]:0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87cb1b04cead4378b79a86ef8a7eddf0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87cb1b04cead4378b79a86ef8a7eddf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.075 [ 1]:0x2 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.075 [ 0]:0x2 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:34.075 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.332 [2024-07-15 18:38:08.750121] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:34.332 2024/07/15 18:38:08 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:34.332 request: 00:12:34.332 { 00:12:34.332 "method": "nvmf_ns_remove_host", 00:12:34.332 "params": { 00:12:34.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.332 "nsid": 2, 00:12:34.332 "host": "nqn.2016-06.io.spdk:host1" 00:12:34.332 } 00:12:34.332 } 00:12:34.332 Got JSON-RPC error response 00:12:34.332 GoRPCClient: error on JSON-RPC call 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.332 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:34.333 18:38:08 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.333 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.590 [ 0]:0x2 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccbd75eb25c94cd0a8154c24cf70374e 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccbd75eb25c94cd0a8154c24cf70374e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72709 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72709 /var/tmp/host.sock 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72709 ']' 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.590 18:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 [2024-07-15 18:38:09.011171] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
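The masking exercise above follows a simple pattern, sketched here with the same commands the log shows: the namespace is attached with --no-auto-visible, visibility is granted or revoked per host NQN with nvmf_ns_add_host / nvmf_ns_remove_host, and the connected host observes the effect through the NGUID returned by nvme id-ns (an all-zero NGUID indicates the namespace is masked for this host). The ns_is_visible function below is a reconstruction of the helper whose individual commands appear in the trace.

# Sketch: per-host namespace masking as exercised above (paths shortened).
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
./scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # expose nsid 1 to host1
./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again

# Host-side check, reconstructed from the ns_is_visible commands in the trace:
ns_is_visible() {
    nvme list-ns /dev/nvme0 | grep "$1"
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]   # all zeroes => namespace not visible to this host
}
ns_is_visible 0x1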
00:12:34.590 [2024-07-15 18:38:09.011308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72709 ] 00:12:34.848 [2024-07-15 18:38:09.167708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.848 [2024-07-15 18:38:09.284680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.411 18:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.411 18:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:35.411 18:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.668 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:35.926 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid abb76c0f-b279-4559-b358-bcf6d717c95e 00:12:35.926 18:38:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:35.926 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ABB76C0FB2794559B358BCF6D717C95E -i 00:12:36.184 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 12a77ffb-8049-48f9-adb7-cb48427b3a30 00:12:36.184 18:38:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:36.184 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 12A77FFB804948F9ADB7CB48427B3A30 -i 00:12:36.505 18:38:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.763 18:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:37.020 18:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:37.021 18:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:37.279 nvme0n1 00:12:37.279 18:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:37.279 18:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:37.537 nvme1n2 00:12:37.794 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:37.794 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 
00:12:37.794 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:37.794 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:37.794 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:38.051 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:38.051 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:38.051 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:38.051 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:38.309 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ abb76c0f-b279-4559-b358-bcf6d717c95e == \a\b\b\7\6\c\0\f\-\b\2\7\9\-\4\5\5\9\-\b\3\5\8\-\b\c\f\6\d\7\1\7\c\9\5\e ]] 00:12:38.309 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:38.309 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:38.309 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:38.566 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 12a77ffb-8049-48f9-adb7-cb48427b3a30 == \1\2\a\7\7\f\f\b\-\8\0\4\9\-\4\8\f\9\-\a\d\b\7\-\c\b\4\8\4\2\7\b\3\a\3\0 ]] 00:12:38.566 18:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72709 00:12:38.566 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72709 ']' 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72709 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72709 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:38.567 killing process with pid 72709 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72709' 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72709 00:12:38.567 18:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72709 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 
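For the final portion above, where the namespaces are recreated with explicit NGUIDs and verified from a second SPDK application acting as the host, a rough sketch follows. It assumes, consistently with the values logged, that uuid2nguid simply uppercases the UUID and strips its dashes; rpc.py paths are shortened and flags are copied as logged.

# Sketch: give a namespace a deterministic NGUID derived from its UUID, then verify from the host side.
uuid2nguid() { tr -d - <<< "${1^^}"; }                        # assumption: uppercase + strip dashes

nguid1=$(uuid2nguid abb76c0f-b279-4559-b358-bcf6d717c95e)     # -> ABB76C0FB2794559B358BCF6D717C95E
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid1" -i
./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Host application (spdk_tgt listening on /var/tmp/host.sock) attaches as host1 and checks what it was given.
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
./scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect abb76c0f-b279-...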
00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.133 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.133 rmmod nvme_tcp 00:12:39.390 rmmod nvme_fabrics 00:12:39.390 rmmod nvme_keyring 00:12:39.390 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.390 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:39.390 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:39.390 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72319 ']' 00:12:39.390 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72319 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72319 ']' 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72319 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72319 00:12:39.391 killing process with pid 72319 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72319' 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72319 00:12:39.391 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72319 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:39.649 ************************************ 00:12:39.649 END TEST nvmf_ns_masking 00:12:39.649 ************************************ 00:12:39.649 00:12:39.649 real 0m18.494s 00:12:39.649 user 0m28.513s 00:12:39.649 sys 0m3.596s 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.649 18:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.649 18:38:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.649 18:38:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:12:39.649 18:38:14 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:39.649 18:38:14 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 
00:12:39.649 18:38:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.649 18:38:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.649 18:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.649 ************************************ 00:12:39.649 START TEST nvmf_host_management 00:12:39.649 ************************************ 00:12:39.649 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:39.649 * Looking for test storage... 00:12:39.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.649 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.649 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.906 
18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:39.906 Cannot find device "nvmf_tgt_br" 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.906 Cannot find device "nvmf_tgt_br2" 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:39.906 Cannot find device "nvmf_tgt_br" 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:39.906 Cannot find device "nvmf_tgt_br2" 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:39.906 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:40.163 18:38:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:40.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:12:40.163 00:12:40.163 --- 10.0.0.2 ping statistics --- 00:12:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.163 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:40.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:12:40.163 00:12:40.163 --- 10.0.0.3 ping statistics --- 00:12:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.163 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:12:40.163 00:12:40.163 --- 10.0.0.1 ping statistics --- 00:12:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.163 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=73064 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 73064 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 73064 ']' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.163 18:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:40.421 [2024-07-15 18:38:14.667274] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:40.421 [2024-07-15 18:38:14.667400] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.421 [2024-07-15 18:38:14.821464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.678 [2024-07-15 18:38:14.943290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.678 [2024-07-15 18:38:14.943569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.678 [2024-07-15 18:38:14.943718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.678 [2024-07-15 18:38:14.943794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.678 [2024-07-15 18:38:14.943836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
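At this point the target-side plumbing traced since nvmftestinit is in place: nvmf_veth_init built a private TCP topology (a network namespace for the target, veth pairs bridged over nvmf_br, 10.0.0.x addresses, an iptables rule admitting port 4420), nvme-tcp was loaded, and nvmfappstart launched nvmf_tgt inside the namespace with core mask 0x1E. Condensed into a sketch: the ip/iptables/modprobe commands mirror the ones traced above, while the RPC polling loop at the end is only an assumed stand-in for waitforlisten:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # (nvmf_tgt_if2 / 10.0.0.3 is set up the same way)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the host-side veth ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    modprobe nvme-tcp

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                                              # crude stand-in for waitforlisten
    done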
00:12:40.678 [2024-07-15 18:38:14.944039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.678 [2024-07-15 18:38:14.944654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.678 [2024-07-15 18:38:14.944785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:40.678 [2024-07-15 18:38:14.944790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 [2024-07-15 18:38:15.890468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 Malloc0 00:12:41.610 [2024-07-15 18:38:15.966211] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.610 18:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
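Before the bdevperf initiator comes up, the target side has been fully provisioned: the transport is created by the explicit rpc_cmd shown above ('-t tcp -o -u 8192'), while the bdev, subsystem, namespace, listener and allowed host run through the batched rpc_cmd fed from rpcs.txt, whose individual calls are not echoed. Reconstructed from what the trace does show (a 64 MiB / 512 B Malloc0, serial SPDKISFASTANDAWESOME, a listener announced on 10.0.0.2 port 4420, and host0 later removed and re-added), the batch plausibly amounts to the following; a reconstruction, not the literal file contents:

    # rpc.py stands for scripts/rpc.py against the target's /var/tmp/spdk.sock, as rpc_cmd wraps it
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0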
00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=73144 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 73144 /var/tmp/bdevperf.sock 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 73144 ']' 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.610 { 00:12:41.610 "params": { 00:12:41.610 "name": "Nvme$subsystem", 00:12:41.610 "trtype": "$TEST_TRANSPORT", 00:12:41.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.610 "adrfam": "ipv4", 00:12:41.610 "trsvcid": "$NVMF_PORT", 00:12:41.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.610 "hdgst": ${hdgst:-false}, 00:12:41.610 "ddgst": ${ddgst:-false} 00:12:41.610 }, 00:12:41.610 "method": "bdev_nvme_attach_controller" 00:12:41.610 } 00:12:41.610 EOF 00:12:41.610 )") 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:41.610 18:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.610 "params": { 00:12:41.610 "name": "Nvme0", 00:12:41.610 "trtype": "tcp", 00:12:41.610 "traddr": "10.0.0.2", 00:12:41.610 "adrfam": "ipv4", 00:12:41.610 "trsvcid": "4420", 00:12:41.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:41.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:41.610 "hdgst": false, 00:12:41.610 "ddgst": false 00:12:41.610 }, 00:12:41.610 "method": "bdev_nvme_attach_controller" 00:12:41.610 }' 00:12:41.610 [2024-07-15 18:38:16.077731] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:12:41.610 [2024-07-15 18:38:16.077849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73144 ] 00:12:41.868 [2024-07-15 18:38:16.218871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.868 [2024-07-15 18:38:16.328243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.127 Running I/O for 10 seconds... 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:42.694 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.694 
18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.694 [2024-07-15 18:38:17.103878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.694 [2024-07-15 18:38:17.104216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.694 [2024-07-15 18:38:17.104375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.694 [2024-07-15 18:38:17.104509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104895] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 
18:38:17.104914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104965] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104975] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.104994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a310 is same with the state(5) to be set 00:12:42.695 [2024-07-15 18:38:17.105333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.105970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.105986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 
nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.695 [2024-07-15 18:38:17.106193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.695 [2024-07-15 18:38:17.106205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.106980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.106995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:42.696 [2024-07-15 18:38:17.107440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.696 [2024-07-15 18:38:17.107542] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1304820 was disconnected and freed. reset controller. 00:12:42.696 [2024-07-15 18:38:17.109622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:42.696 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.696 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:42.696 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.696 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 task offset: 122880 on job bdev=Nvme0n1 fails 00:12:42.696 00:12:42.696 Latency(us) 00:12:42.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.696 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:42.696 Job: Nvme0n1 ended in about 0.61 seconds with error 00:12:42.696 Verification LBA range: start 0x0 length 0x400 00:12:42.697 Nvme0n1 : 0.61 1579.98 98.75 105.33 0.00 37045.67 5398.92 39945.75 00:12:42.697 =================================================================================================================== 00:12:42.697 Total : 1579.98 98.75 105.33 0.00 37045.67 5398.92 39945.75 00:12:42.697 [2024-07-15 18:38:17.113244] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:42.697 [2024-07-15 18:38:17.113300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304af0 (9): Bad file descriptor 00:12:42.697 [2024-07-15 18:38:17.118857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
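For reference, the nvmf_subsystem_add_host step recorded above can also be issued directly against a running target with SPDK's rpc.py. This is an illustrative sketch using the NQNs and repo path from this log, not output captured from the run; the nvmf_get_subsystems call is included only as an assumed way to confirm the allowed-host list afterwards.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # list subsystems and check that host0 now appears among the subsystem's allowed hosts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems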
00:12:42.697 18:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.697 18:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 73144 00:12:44.066 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (73144) - No such process 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:44.066 { 00:12:44.066 "params": { 00:12:44.066 "name": "Nvme$subsystem", 00:12:44.066 "trtype": "$TEST_TRANSPORT", 00:12:44.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:44.066 "adrfam": "ipv4", 00:12:44.066 "trsvcid": "$NVMF_PORT", 00:12:44.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:44.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:44.066 "hdgst": ${hdgst:-false}, 00:12:44.066 "ddgst": ${ddgst:-false} 00:12:44.066 }, 00:12:44.066 "method": "bdev_nvme_attach_controller" 00:12:44.066 } 00:12:44.066 EOF 00:12:44.066 )") 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:44.066 18:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:44.066 "params": { 00:12:44.066 "name": "Nvme0", 00:12:44.066 "trtype": "tcp", 00:12:44.066 "traddr": "10.0.0.2", 00:12:44.066 "adrfam": "ipv4", 00:12:44.066 "trsvcid": "4420", 00:12:44.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:44.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:44.066 "hdgst": false, 00:12:44.066 "ddgst": false 00:12:44.066 }, 00:12:44.066 "method": "bdev_nvme_attach_controller" 00:12:44.066 }' 00:12:44.066 [2024-07-15 18:38:18.192864] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:44.066 [2024-07-15 18:38:18.192988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73194 ] 00:12:44.066 [2024-07-15 18:38:18.344471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.066 [2024-07-15 18:38:18.518332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.323 Running I/O for 1 seconds... 
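The JSON fragment printed above by gen_nvmf_target_json is what bdevperf reads through /dev/fd/62 to attach the NVMe-oF target. As a rough equivalent (a minimal sketch assembled from the parameters shown in that printf, not part of the captured output), the same controller could be attached to a running SPDK application with the bdev_nvme_attach_controller RPC:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
Digest options (hdgst/ddgst) are left at their defaults here, matching the false values in the generated JSON.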
00:12:45.695 00:12:45.695 Latency(us) 00:12:45.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.695 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:45.695 Verification LBA range: start 0x0 length 0x400 00:12:45.695 Nvme0n1 : 1.00 1468.31 91.77 0.00 0.00 42786.04 5867.03 42692.02 00:12:45.695 =================================================================================================================== 00:12:45.695 Total : 1468.31 91.77 0.00 0.00 42786.04 5867.03 42692.02 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.695 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.695 rmmod nvme_tcp 00:12:45.954 rmmod nvme_fabrics 00:12:45.954 rmmod nvme_keyring 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 73064 ']' 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 73064 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 73064 ']' 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 73064 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73064 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:45.954 killing process with pid 73064 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73064' 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 73064 00:12:45.954 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 73064 00:12:46.212 [2024-07-15 18:38:20.440739] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:46.212 00:12:46.212 real 0m6.490s 00:12:46.212 user 0m24.935s 00:12:46.212 sys 0m1.721s 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.212 18:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:46.212 ************************************ 00:12:46.212 END TEST nvmf_host_management 00:12:46.212 ************************************ 00:12:46.212 18:38:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.212 18:38:20 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:46.212 18:38:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.212 18:38:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.212 18:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.212 ************************************ 00:12:46.212 START TEST nvmf_lvol 00:12:46.212 ************************************ 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:46.212 * Looking for test storage... 
00:12:46.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.212 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.470 18:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:46.471 18:38:20 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:46.471 Cannot find device "nvmf_tgt_br" 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.471 Cannot find device "nvmf_tgt_br2" 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:46.471 Cannot find device "nvmf_tgt_br" 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:46.471 Cannot find device "nvmf_tgt_br2" 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.471 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:46.728 18:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:46.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:12:46.728 00:12:46.728 --- 10.0.0.2 ping statistics --- 00:12:46.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.728 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:46.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:46.728 00:12:46.728 --- 10.0.0.3 ping statistics --- 00:12:46.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.728 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:12:46.728 00:12:46.728 --- 10.0.0.1 ping statistics --- 00:12:46.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.728 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73404 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73404 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73404 ']' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.728 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.729 18:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.729 [2024-07-15 18:38:21.154531] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:46.729 [2024-07-15 18:38:21.154619] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.986 [2024-07-15 18:38:21.295125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.986 [2024-07-15 18:38:21.468550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.986 [2024-07-15 18:38:21.468630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:46.986 [2024-07-15 18:38:21.468646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.986 [2024-07-15 18:38:21.468660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.986 [2024-07-15 18:38:21.468671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.243 [2024-07-15 18:38:21.469194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.243 [2024-07-15 18:38:21.469319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.243 [2024-07-15 18:38:21.469323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.809 18:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:48.373 [2024-07-15 18:38:22.549541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.373 18:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.631 18:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:48.631 18:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.942 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:48.942 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:49.202 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:49.459 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d3da1cfc-1627-4177-b855-f6117c2f7be8 00:12:49.459 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3da1cfc-1627-4177-b855-f6117c2f7be8 lvol 20 00:12:49.717 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fd965f5d-f281-48de-93d5-462dff850ffb 00:12:49.717 18:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:49.717 18:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd965f5d-f281-48de-93d5-462dff850ffb 00:12:49.974 18:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:50.232 [2024-07-15 18:38:24.625251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.232 18:38:24 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.489 18:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73552 00:12:50.489 18:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:50.489 18:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:51.421 18:38:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot fd965f5d-f281-48de-93d5-462dff850ffb MY_SNAPSHOT 00:12:51.985 18:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fd6f0a4-8ae2-495a-b4b8-f5040044991d 00:12:51.985 18:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize fd965f5d-f281-48de-93d5-462dff850ffb 30 00:12:52.242 18:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7fd6f0a4-8ae2-495a-b4b8-f5040044991d MY_CLONE 00:12:52.499 18:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=440fa551-06c6-4948-b0a4-99ac4364b341 00:12:52.499 18:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 440fa551-06c6-4948-b0a4-99ac4364b341 00:12:53.442 18:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73552 00:13:01.547 Initializing NVMe Controllers 00:13:01.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:01.547 Controller IO queue size 128, less than required. 00:13:01.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:01.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:01.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:01.547 Initialization complete. Launching workers. 
00:13:01.547 ======================================================== 00:13:01.547 Latency(us) 00:13:01.547 Device Information : IOPS MiB/s Average min max 00:13:01.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9921.55 38.76 12903.82 2774.69 85861.48 00:13:01.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9802.35 38.29 13059.54 3210.79 84981.09 00:13:01.547 ======================================================== 00:13:01.547 Total : 19723.89 77.05 12981.21 2774.69 85861.48 00:13:01.547 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fd965f5d-f281-48de-93d5-462dff850ffb 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3da1cfc-1627-4177-b855-f6117c2f7be8 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.547 18:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.547 rmmod nvme_tcp 00:13:01.547 rmmod nvme_fabrics 00:13:01.547 rmmod nvme_keyring 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73404 ']' 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73404 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73404 ']' 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73404 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73404 00:13:01.806 killing process with pid 73404 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73404' 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73404 00:13:01.806 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73404 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.064 
18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:02.064 ************************************ 00:13:02.064 END TEST nvmf_lvol 00:13:02.064 ************************************ 00:13:02.064 00:13:02.064 real 0m15.822s 00:13:02.064 user 1m4.369s 00:13:02.064 sys 0m5.269s 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.064 18:38:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:02.064 18:38:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:02.064 18:38:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:02.064 18:38:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.064 18:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.064 ************************************ 00:13:02.064 START TEST nvmf_lvs_grow 00:13:02.064 ************************************ 00:13:02.064 18:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:02.323 * Looking for test storage... 
00:13:02.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.323 18:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:02.324 Cannot find device "nvmf_tgt_br" 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.324 Cannot find device "nvmf_tgt_br2" 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:02.324 Cannot find device "nvmf_tgt_br" 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:02.324 Cannot find device "nvmf_tgt_br2" 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.324 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.324 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.582 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:02.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:13:02.583 00:13:02.583 --- 10.0.0.2 ping statistics --- 00:13:02.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.583 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:02.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:02.583 00:13:02.583 --- 10.0.0.3 ping statistics --- 00:13:02.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.583 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:13:02.583 00:13:02.583 --- 10.0.0.1 ping statistics --- 00:13:02.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.583 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.583 18:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73913 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73913 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73913 ']' 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.583 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.583 [2024-07-15 18:38:37.057715] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:02.583 [2024-07-15 18:38:37.057813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.841 [2024-07-15 18:38:37.195451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.841 [2024-07-15 18:38:37.303892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.841 [2024-07-15 18:38:37.303972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.841 [2024-07-15 18:38:37.303992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.841 [2024-07-15 18:38:37.304022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.841 [2024-07-15 18:38:37.304035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.841 [2024-07-15 18:38:37.304090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.786 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.786 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:03.786 18:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.786 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.786 18:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 18:38:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.786 18:38:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:04.043 [2024-07-15 18:38:38.283845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:04.043 ************************************ 00:13:04.043 START TEST lvs_grow_clean 00:13:04.043 ************************************ 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:04.043 18:38:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:04.043 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.301 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:04.301 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:04.558 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:04.558 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:04.558 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:04.816 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:04.816 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:04.816 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5e3488b3-0d2e-4925-af5d-ea198217c1de lvol 150 00:13:05.073 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5963f97d-78bd-4383-92bd-67c31279e78f 00:13:05.073 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:05.073 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:05.330 [2024-07-15 18:38:39.681775] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:05.330 [2024-07-15 18:38:39.681855] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:05.330 true 00:13:05.331 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:05.331 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:05.588 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:05.588 18:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:05.845 18:38:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5963f97d-78bd-4383-92bd-67c31279e78f 00:13:06.102 18:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:06.360 [2024-07-15 18:38:40.794376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.360 18:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74080 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74080 /var/tmp/bdevperf.sock 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 74080 ']' 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.618 18:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:06.875 [2024-07-15 18:38:41.137797] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
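The lvs_grow_clean body above boils down to: back a 4 KiB-block AIO bdev with a 200 MiB file, create a logical-volume store with 4 MiB clusters (49 data clusters), carve out a 150 MiB lvol, grow the file to 400 MiB and rescan the AIO bdev, then export the lvol over NVMe/TCP; bdevperf attaches from the host side (bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420) and drives 10 s of 4 KiB random writes. A minimal sketch of that provisioning as standalone RPC calls — $rpc and $aio are shorthand introduced here for readability, not variables from the test script:
# sketch only: provisioning sequence taken from the commands in the log above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
rm -f "$aio" && truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 (200M / 4M, minus metadata)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume -> 38 clusters allocated
truncate -s 400M "$aio"                            # resize the backing file...
$rpc bdev_aio_rescan aio_bdev                      # ...the bdev grows; the lvstore still reports 49
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420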
00:13:06.875 [2024-07-15 18:38:41.137904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74080 ] 00:13:06.875 [2024-07-15 18:38:41.278302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.132 [2024-07-15 18:38:41.387165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.696 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.696 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:07.696 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:07.954 Nvme0n1 00:13:08.211 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:08.211 [ 00:13:08.211 { 00:13:08.211 "aliases": [ 00:13:08.211 "5963f97d-78bd-4383-92bd-67c31279e78f" 00:13:08.211 ], 00:13:08.211 "assigned_rate_limits": { 00:13:08.211 "r_mbytes_per_sec": 0, 00:13:08.211 "rw_ios_per_sec": 0, 00:13:08.211 "rw_mbytes_per_sec": 0, 00:13:08.211 "w_mbytes_per_sec": 0 00:13:08.211 }, 00:13:08.211 "block_size": 4096, 00:13:08.211 "claimed": false, 00:13:08.211 "driver_specific": { 00:13:08.211 "mp_policy": "active_passive", 00:13:08.211 "nvme": [ 00:13:08.211 { 00:13:08.211 "ctrlr_data": { 00:13:08.211 "ana_reporting": false, 00:13:08.211 "cntlid": 1, 00:13:08.211 "firmware_revision": "24.09", 00:13:08.211 "model_number": "SPDK bdev Controller", 00:13:08.211 "multi_ctrlr": true, 00:13:08.211 "oacs": { 00:13:08.211 "firmware": 0, 00:13:08.211 "format": 0, 00:13:08.211 "ns_manage": 0, 00:13:08.211 "security": 0 00:13:08.211 }, 00:13:08.211 "serial_number": "SPDK0", 00:13:08.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.211 "vendor_id": "0x8086" 00:13:08.211 }, 00:13:08.211 "ns_data": { 00:13:08.211 "can_share": true, 00:13:08.211 "id": 1 00:13:08.211 }, 00:13:08.211 "trid": { 00:13:08.211 "adrfam": "IPv4", 00:13:08.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.211 "traddr": "10.0.0.2", 00:13:08.211 "trsvcid": "4420", 00:13:08.211 "trtype": "TCP" 00:13:08.211 }, 00:13:08.211 "vs": { 00:13:08.211 "nvme_version": "1.3" 00:13:08.211 } 00:13:08.211 } 00:13:08.211 ] 00:13:08.211 }, 00:13:08.211 "memory_domains": [ 00:13:08.211 { 00:13:08.211 "dma_device_id": "system", 00:13:08.211 "dma_device_type": 1 00:13:08.211 } 00:13:08.211 ], 00:13:08.211 "name": "Nvme0n1", 00:13:08.211 "num_blocks": 38912, 00:13:08.211 "product_name": "NVMe disk", 00:13:08.211 "supported_io_types": { 00:13:08.211 "abort": true, 00:13:08.211 "compare": true, 00:13:08.211 "compare_and_write": true, 00:13:08.211 "copy": true, 00:13:08.211 "flush": true, 00:13:08.212 "get_zone_info": false, 00:13:08.212 "nvme_admin": true, 00:13:08.212 "nvme_io": true, 00:13:08.212 "nvme_io_md": false, 00:13:08.212 "nvme_iov_md": false, 00:13:08.212 "read": true, 00:13:08.212 "reset": true, 00:13:08.212 "seek_data": false, 00:13:08.212 "seek_hole": false, 00:13:08.212 "unmap": true, 00:13:08.212 "write": true, 00:13:08.212 "write_zeroes": true, 00:13:08.212 "zcopy": false, 00:13:08.212 
"zone_append": false, 00:13:08.212 "zone_management": false 00:13:08.212 }, 00:13:08.212 "uuid": "5963f97d-78bd-4383-92bd-67c31279e78f", 00:13:08.212 "zoned": false 00:13:08.212 } 00:13:08.212 ] 00:13:08.212 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74129 00:13:08.212 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:08.212 18:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:08.469 Running I/O for 10 seconds... 00:13:09.400 Latency(us) 00:13:09.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.400 Nvme0n1 : 1.00 9119.00 35.62 0.00 0.00 0.00 0.00 0.00 00:13:09.400 =================================================================================================================== 00:13:09.400 Total : 9119.00 35.62 0.00 0.00 0.00 0.00 0.00 00:13:09.400 00:13:10.331 18:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:10.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.331 Nvme0n1 : 2.00 9129.00 35.66 0.00 0.00 0.00 0.00 0.00 00:13:10.331 =================================================================================================================== 00:13:10.331 Total : 9129.00 35.66 0.00 0.00 0.00 0.00 0.00 00:13:10.331 00:13:10.587 true 00:13:10.587 18:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:10.587 18:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:10.844 18:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:10.844 18:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:10.845 18:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 74129 00:13:11.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.408 Nvme0n1 : 3.00 9129.33 35.66 0.00 0.00 0.00 0.00 0.00 00:13:11.408 =================================================================================================================== 00:13:11.408 Total : 9129.33 35.66 0.00 0.00 0.00 0.00 0.00 00:13:11.408 00:13:12.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.340 Nvme0n1 : 4.00 9097.50 35.54 0.00 0.00 0.00 0.00 0.00 00:13:12.340 =================================================================================================================== 00:13:12.340 Total : 9097.50 35.54 0.00 0.00 0.00 0.00 0.00 00:13:12.340 00:13:13.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.287 Nvme0n1 : 5.00 9048.20 35.34 0.00 0.00 0.00 0.00 0.00 00:13:13.287 =================================================================================================================== 00:13:13.287 Total : 9048.20 35.34 0.00 0.00 0.00 0.00 0.00 00:13:13.287 00:13:14.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.661 
Nvme0n1 : 6.00 8971.83 35.05 0.00 0.00 0.00 0.00 0.00 00:13:14.661 =================================================================================================================== 00:13:14.661 Total : 8971.83 35.05 0.00 0.00 0.00 0.00 0.00 00:13:14.661 00:13:15.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.594 Nvme0n1 : 7.00 8830.00 34.49 0.00 0.00 0.00 0.00 0.00 00:13:15.594 =================================================================================================================== 00:13:15.594 Total : 8830.00 34.49 0.00 0.00 0.00 0.00 0.00 00:13:15.594 00:13:16.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.527 Nvme0n1 : 8.00 8730.62 34.10 0.00 0.00 0.00 0.00 0.00 00:13:16.527 =================================================================================================================== 00:13:16.527 Total : 8730.62 34.10 0.00 0.00 0.00 0.00 0.00 00:13:16.527 00:13:17.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.461 Nvme0n1 : 9.00 8618.89 33.67 0.00 0.00 0.00 0.00 0.00 00:13:17.461 =================================================================================================================== 00:13:17.461 Total : 8618.89 33.67 0.00 0.00 0.00 0.00 0.00 00:13:17.461 00:13:18.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.395 Nvme0n1 : 10.00 8548.00 33.39 0.00 0.00 0.00 0.00 0.00 00:13:18.395 =================================================================================================================== 00:13:18.395 Total : 8548.00 33.39 0.00 0.00 0.00 0.00 0.00 00:13:18.395 00:13:18.395 00:13:18.395 Latency(us) 00:13:18.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.395 Nvme0n1 : 10.01 8551.10 33.40 0.00 0.00 14959.51 6616.02 47185.92 00:13:18.395 =================================================================================================================== 00:13:18.395 Total : 8551.10 33.40 0.00 0.00 14959.51 6616.02 47185.92 00:13:18.395 0 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74080 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 74080 ']' 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 74080 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74080 00:13:18.395 killing process with pid 74080 00:13:18.395 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.395 00:13:18.395 Latency(us) 00:13:18.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.395 =================================================================================================================== 00:13:18.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74080' 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 74080 00:13:18.395 18:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 74080 00:13:18.652 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.909 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:19.166 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.166 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:19.423 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.423 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:19.423 18:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:19.680 [2024-07-15 18:38:54.041723] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:19.680 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:19.680 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:19.680 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:19.681 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:19.938 2024/07/15 18:38:54 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:5e3488b3-0d2e-4925-af5d-ea198217c1de], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:13:19.938 request: 00:13:19.938 { 00:13:19.938 "method": "bdev_lvol_get_lvstores", 00:13:19.938 "params": { 00:13:19.938 "uuid": "5e3488b3-0d2e-4925-af5d-ea198217c1de" 00:13:19.938 } 00:13:19.938 } 00:13:19.938 Got JSON-RPC error response 00:13:19.938 GoRPCClient: error on JSON-RPC call 00:13:19.938 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:19.938 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.938 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.938 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.938 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.196 aio_bdev 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5963f97d-78bd-4383-92bd-67c31279e78f 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5963f97d-78bd-4383-92bd-67c31279e78f 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.196 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:20.454 18:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5963f97d-78bd-4383-92bd-67c31279e78f -t 2000 00:13:20.712 [ 00:13:20.712 { 00:13:20.712 "aliases": [ 00:13:20.712 "lvs/lvol" 00:13:20.712 ], 00:13:20.712 "assigned_rate_limits": { 00:13:20.712 "r_mbytes_per_sec": 0, 00:13:20.712 "rw_ios_per_sec": 0, 00:13:20.712 "rw_mbytes_per_sec": 0, 00:13:20.712 "w_mbytes_per_sec": 0 00:13:20.712 }, 00:13:20.712 "block_size": 4096, 00:13:20.712 "claimed": false, 00:13:20.712 "driver_specific": { 00:13:20.712 "lvol": { 00:13:20.712 "base_bdev": "aio_bdev", 00:13:20.712 "clone": false, 00:13:20.712 "esnap_clone": false, 00:13:20.712 "lvol_store_uuid": "5e3488b3-0d2e-4925-af5d-ea198217c1de", 00:13:20.712 "num_allocated_clusters": 38, 00:13:20.712 "snapshot": false, 00:13:20.712 "thin_provision": false 00:13:20.712 } 00:13:20.712 }, 00:13:20.712 "name": "5963f97d-78bd-4383-92bd-67c31279e78f", 00:13:20.712 "num_blocks": 38912, 00:13:20.712 "product_name": "Logical Volume", 00:13:20.712 "supported_io_types": { 00:13:20.712 "abort": false, 00:13:20.712 "compare": false, 00:13:20.712 "compare_and_write": false, 00:13:20.712 "copy": false, 00:13:20.712 "flush": false, 00:13:20.712 "get_zone_info": false, 00:13:20.712 "nvme_admin": false, 00:13:20.712 "nvme_io": false, 00:13:20.712 "nvme_io_md": false, 00:13:20.712 "nvme_iov_md": false, 00:13:20.712 "read": true, 00:13:20.712 "reset": true, 
00:13:20.712 "seek_data": true, 00:13:20.712 "seek_hole": true, 00:13:20.712 "unmap": true, 00:13:20.712 "write": true, 00:13:20.712 "write_zeroes": true, 00:13:20.712 "zcopy": false, 00:13:20.712 "zone_append": false, 00:13:20.712 "zone_management": false 00:13:20.712 }, 00:13:20.712 "uuid": "5963f97d-78bd-4383-92bd-67c31279e78f", 00:13:20.712 "zoned": false 00:13:20.712 } 00:13:20.712 ] 00:13:20.712 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:20.712 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:20.712 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:21.278 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:21.278 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:21.278 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:21.278 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:21.278 18:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5963f97d-78bd-4383-92bd-67c31279e78f 00:13:21.843 18:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e3488b3-0d2e-4925-af5d-ea198217c1de 00:13:22.100 18:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:22.100 18:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.666 ************************************ 00:13:22.666 END TEST lvs_grow_clean 00:13:22.666 ************************************ 00:13:22.666 00:13:22.666 real 0m18.724s 00:13:22.666 user 0m17.174s 00:13:22.666 sys 0m2.934s 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:22.666 ************************************ 00:13:22.666 START TEST lvs_grow_dirty 00:13:22.666 ************************************ 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.666 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:22.924 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:22.924 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:23.183 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:23.183 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:23.183 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:23.747 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:23.747 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:23.747 18:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 lvol 150 00:13:24.004 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:24.004 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:24.004 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:24.261 [2024-07-15 18:38:58.505864] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:24.261 [2024-07-15 18:38:58.505983] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:24.261 true 00:13:24.261 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:24.261 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:24.518 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:24.518 18:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:24.776 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:25.033 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:25.311 [2024-07-15 18:38:59.642524] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.311 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74537 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74537 /var/tmp/bdevperf.sock 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74537 ']' 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.568 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.569 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.569 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.569 18:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:25.569 [2024-07-15 18:39:00.015140] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
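Above, the clean variant finished by growing the lvstore onto the resized bdev and proving the metadata survives a clean reload: total_data_clusters goes from 49 to 99, deleting aio_bdev hot-removes the lvstore (so bdev_lvol_get_lvstores fails with Code=-19), and re-creating the AIO bdev brings the lvol back with 61 of 99 clusters free (99 minus the 38 allocated to the lvol). A rough sketch of that sequence, reusing the $rpc/$aio/$lvs shorthand introduced in the earlier sketch:
# sketch only: grow-and-reload check from the clean variant above
$rpc bdev_lvol_grow_lvstore -u "$lvs"              # lvstore now spans the 400M file
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99
$rpc bdev_aio_delete aio_bdev                      # hot-remove closes the lvstore
$rpc bdev_lvol_get_lvstores -u "$lvs"              # now fails: Code=-19 Msg=No such device
$rpc bdev_aio_create "$aio" aio_bdev 4096          # re-attach the same backing file
$rpc bdev_wait_for_examine                         # lvol is rediscovered by examine
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61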
00:13:25.569 [2024-07-15 18:39:00.015253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74537 ] 00:13:25.826 [2024-07-15 18:39:00.154755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.826 [2024-07-15 18:39:00.275009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.757 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.757 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:26.757 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:27.014 Nvme0n1 00:13:27.014 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:27.272 [ 00:13:27.272 { 00:13:27.272 "aliases": [ 00:13:27.272 "aa71ba15-e78c-4832-99b5-6abdf6001312" 00:13:27.272 ], 00:13:27.272 "assigned_rate_limits": { 00:13:27.272 "r_mbytes_per_sec": 0, 00:13:27.272 "rw_ios_per_sec": 0, 00:13:27.272 "rw_mbytes_per_sec": 0, 00:13:27.272 "w_mbytes_per_sec": 0 00:13:27.272 }, 00:13:27.272 "block_size": 4096, 00:13:27.272 "claimed": false, 00:13:27.272 "driver_specific": { 00:13:27.272 "mp_policy": "active_passive", 00:13:27.272 "nvme": [ 00:13:27.272 { 00:13:27.272 "ctrlr_data": { 00:13:27.272 "ana_reporting": false, 00:13:27.272 "cntlid": 1, 00:13:27.272 "firmware_revision": "24.09", 00:13:27.272 "model_number": "SPDK bdev Controller", 00:13:27.272 "multi_ctrlr": true, 00:13:27.272 "oacs": { 00:13:27.272 "firmware": 0, 00:13:27.272 "format": 0, 00:13:27.272 "ns_manage": 0, 00:13:27.272 "security": 0 00:13:27.272 }, 00:13:27.272 "serial_number": "SPDK0", 00:13:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:27.272 "vendor_id": "0x8086" 00:13:27.272 }, 00:13:27.272 "ns_data": { 00:13:27.272 "can_share": true, 00:13:27.272 "id": 1 00:13:27.272 }, 00:13:27.272 "trid": { 00:13:27.272 "adrfam": "IPv4", 00:13:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:27.272 "traddr": "10.0.0.2", 00:13:27.272 "trsvcid": "4420", 00:13:27.272 "trtype": "TCP" 00:13:27.272 }, 00:13:27.272 "vs": { 00:13:27.272 "nvme_version": "1.3" 00:13:27.272 } 00:13:27.272 } 00:13:27.272 ] 00:13:27.272 }, 00:13:27.272 "memory_domains": [ 00:13:27.272 { 00:13:27.272 "dma_device_id": "system", 00:13:27.272 "dma_device_type": 1 00:13:27.272 } 00:13:27.272 ], 00:13:27.272 "name": "Nvme0n1", 00:13:27.272 "num_blocks": 38912, 00:13:27.272 "product_name": "NVMe disk", 00:13:27.272 "supported_io_types": { 00:13:27.272 "abort": true, 00:13:27.272 "compare": true, 00:13:27.272 "compare_and_write": true, 00:13:27.273 "copy": true, 00:13:27.273 "flush": true, 00:13:27.273 "get_zone_info": false, 00:13:27.273 "nvme_admin": true, 00:13:27.273 "nvme_io": true, 00:13:27.273 "nvme_io_md": false, 00:13:27.273 "nvme_iov_md": false, 00:13:27.273 "read": true, 00:13:27.273 "reset": true, 00:13:27.273 "seek_data": false, 00:13:27.273 "seek_hole": false, 00:13:27.273 "unmap": true, 00:13:27.273 "write": true, 00:13:27.273 "write_zeroes": true, 00:13:27.273 "zcopy": false, 00:13:27.273 
"zone_append": false, 00:13:27.273 "zone_management": false 00:13:27.273 }, 00:13:27.273 "uuid": "aa71ba15-e78c-4832-99b5-6abdf6001312", 00:13:27.273 "zoned": false 00:13:27.273 } 00:13:27.273 ] 00:13:27.273 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74585 00:13:27.273 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:27.273 18:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:27.531 Running I/O for 10 seconds... 00:13:28.464 Latency(us) 00:13:28.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.464 Nvme0n1 : 1.00 9587.00 37.45 0.00 0.00 0.00 0.00 0.00 00:13:28.464 =================================================================================================================== 00:13:28.464 Total : 9587.00 37.45 0.00 0.00 0.00 0.00 0.00 00:13:28.464 00:13:29.433 18:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:29.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.433 Nvme0n1 : 2.00 8899.00 34.76 0.00 0.00 0.00 0.00 0.00 00:13:29.433 =================================================================================================================== 00:13:29.433 Total : 8899.00 34.76 0.00 0.00 0.00 0.00 0.00 00:13:29.433 00:13:29.690 true 00:13:29.690 18:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:29.690 18:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:29.948 18:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:29.948 18:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:29.948 18:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74585 00:13:30.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.514 Nvme0n1 : 3.00 8829.33 34.49 0.00 0.00 0.00 0.00 0.00 00:13:30.514 =================================================================================================================== 00:13:30.514 Total : 8829.33 34.49 0.00 0.00 0.00 0.00 0.00 00:13:30.514 00:13:31.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.448 Nvme0n1 : 4.00 8788.75 34.33 0.00 0.00 0.00 0.00 0.00 00:13:31.448 =================================================================================================================== 00:13:31.448 Total : 8788.75 34.33 0.00 0.00 0.00 0.00 0.00 00:13:31.448 00:13:32.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.821 Nvme0n1 : 5.00 8578.60 33.51 0.00 0.00 0.00 0.00 0.00 00:13:32.821 =================================================================================================================== 00:13:32.821 Total : 8578.60 33.51 0.00 0.00 0.00 0.00 0.00 00:13:32.821 00:13:33.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.755 
Nvme0n1 : 6.00 8297.33 32.41 0.00 0.00 0.00 0.00 0.00 00:13:33.755 =================================================================================================================== 00:13:33.755 Total : 8297.33 32.41 0.00 0.00 0.00 0.00 0.00 00:13:33.755 00:13:34.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.691 Nvme0n1 : 7.00 8305.14 32.44 0.00 0.00 0.00 0.00 0.00 00:13:34.691 =================================================================================================================== 00:13:34.691 Total : 8305.14 32.44 0.00 0.00 0.00 0.00 0.00 00:13:34.691 00:13:35.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.627 Nvme0n1 : 8.00 8407.50 32.84 0.00 0.00 0.00 0.00 0.00 00:13:35.627 =================================================================================================================== 00:13:35.627 Total : 8407.50 32.84 0.00 0.00 0.00 0.00 0.00 00:13:35.627 00:13:36.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.559 Nvme0n1 : 9.00 8442.78 32.98 0.00 0.00 0.00 0.00 0.00 00:13:36.559 =================================================================================================================== 00:13:36.559 Total : 8442.78 32.98 0.00 0.00 0.00 0.00 0.00 00:13:36.559 00:13:37.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.493 Nvme0n1 : 10.00 8494.80 33.18 0.00 0.00 0.00 0.00 0.00 00:13:37.493 =================================================================================================================== 00:13:37.493 Total : 8494.80 33.18 0.00 0.00 0.00 0.00 0.00 00:13:37.493 00:13:37.493 00:13:37.493 Latency(us) 00:13:37.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.493 Nvme0n1 : 10.00 8492.30 33.17 0.00 0.00 15065.29 6023.07 226692.14 00:13:37.493 =================================================================================================================== 00:13:37.493 Total : 8492.30 33.17 0.00 0.00 15065.29 6023.07 226692.14 00:13:37.493 0 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74537 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74537 ']' 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74537 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74537 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:37.493 killing process with pid 74537 00:13:37.493 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.493 00:13:37.493 Latency(us) 00:13:37.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.493 =================================================================================================================== 00:13:37.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74537' 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74537 00:13:37.493 18:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74537 00:13:37.750 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.012 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:38.282 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:38.282 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73913 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73913 00:13:38.540 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73913 Killed "${NVMF_APP[@]}" "$@" 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.540 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74749 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74749 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74749 ']' 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
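The dirty variant diverges at this point: instead of a clean teardown, the nvmf_tgt holding the open lvstore is killed with SIGKILL, a fresh target is started in the same namespace, and re-creating the AIO bdev forces blobstore recovery ("Performing recovery on blobstore") before the lvol and its 61 free / 99 total clusters reappear. A rough sketch of that path — the pid and UUIDs shown are the ones from this particular run:
# sketch only: dirty-shutdown and recovery path exercised around this point in the log
kill -9 73913                                      # SIGKILL: the lvstore is never unloaded cleanly
wait 73913
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
$rpc bdev_aio_create "$aio" aio_bdev 4096          # load triggers "Performing recovery on blobstore"
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b aa71ba15-e78c-4832-99b5-6abdf6001312 -t 2000        # lvol survives recovery
$rpc bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 | jq -r '.[0].free_clusters'   # still 61 of 99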
00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:38.541 18:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:38.798 [2024-07-15 18:39:13.055795] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:38.798 [2024-07-15 18:39:13.055906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.798 [2024-07-15 18:39:13.204433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.056 [2024-07-15 18:39:13.309584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.056 [2024-07-15 18:39:13.309646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.056 [2024-07-15 18:39:13.309660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.056 [2024-07-15 18:39:13.309669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.056 [2024-07-15 18:39:13.309677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.056 [2024-07-15 18:39:13.309705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.621 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:39.880 [2024-07-15 18:39:14.247822] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:39.880 [2024-07-15 18:39:14.248034] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:39.880 [2024-07-15 18:39:14.248274] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.880 18:39:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.880 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:40.138 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa71ba15-e78c-4832-99b5-6abdf6001312 -t 2000 00:13:40.397 [ 00:13:40.397 { 00:13:40.397 "aliases": [ 00:13:40.397 "lvs/lvol" 00:13:40.397 ], 00:13:40.397 "assigned_rate_limits": { 00:13:40.397 "r_mbytes_per_sec": 0, 00:13:40.397 "rw_ios_per_sec": 0, 00:13:40.397 "rw_mbytes_per_sec": 0, 00:13:40.397 "w_mbytes_per_sec": 0 00:13:40.397 }, 00:13:40.397 "block_size": 4096, 00:13:40.397 "claimed": false, 00:13:40.397 "driver_specific": { 00:13:40.397 "lvol": { 00:13:40.397 "base_bdev": "aio_bdev", 00:13:40.397 "clone": false, 00:13:40.397 "esnap_clone": false, 00:13:40.397 "lvol_store_uuid": "cbc7e197-400d-448f-a52f-7330f4d1d8b7", 00:13:40.397 "num_allocated_clusters": 38, 00:13:40.397 "snapshot": false, 00:13:40.397 "thin_provision": false 00:13:40.397 } 00:13:40.397 }, 00:13:40.397 "name": "aa71ba15-e78c-4832-99b5-6abdf6001312", 00:13:40.397 "num_blocks": 38912, 00:13:40.397 "product_name": "Logical Volume", 00:13:40.397 "supported_io_types": { 00:13:40.397 "abort": false, 00:13:40.397 "compare": false, 00:13:40.397 "compare_and_write": false, 00:13:40.397 "copy": false, 00:13:40.397 "flush": false, 00:13:40.397 "get_zone_info": false, 00:13:40.397 "nvme_admin": false, 00:13:40.397 "nvme_io": false, 00:13:40.397 "nvme_io_md": false, 00:13:40.397 "nvme_iov_md": false, 00:13:40.397 "read": true, 00:13:40.397 "reset": true, 00:13:40.397 "seek_data": true, 00:13:40.397 "seek_hole": true, 00:13:40.397 "unmap": true, 00:13:40.397 "write": true, 00:13:40.397 "write_zeroes": true, 00:13:40.397 "zcopy": false, 00:13:40.397 "zone_append": false, 00:13:40.397 "zone_management": false 00:13:40.397 }, 00:13:40.397 "uuid": "aa71ba15-e78c-4832-99b5-6abdf6001312", 00:13:40.397 "zoned": false 00:13:40.397 } 00:13:40.397 ] 00:13:40.397 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:40.397 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:40.397 18:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:40.654 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:40.654 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:40.654 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:40.910 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:40.910 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:41.167 
[2024-07-15 18:39:15.448891] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:41.167 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:41.425 2024/07/15 18:39:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cbc7e197-400d-448f-a52f-7330f4d1d8b7], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:13:41.425 request: 00:13:41.425 { 00:13:41.425 "method": "bdev_lvol_get_lvstores", 00:13:41.425 "params": { 00:13:41.425 "uuid": "cbc7e197-400d-448f-a52f-7330f4d1d8b7" 00:13:41.425 } 00:13:41.425 } 00:13:41.425 Got JSON-RPC error response 00:13:41.425 GoRPCClient: error on JSON-RPC call 00:13:41.425 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:41.425 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.425 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.425 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.425 18:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:41.683 aio_bdev 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:41.683 18:39:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:41.683 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:41.942 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa71ba15-e78c-4832-99b5-6abdf6001312 -t 2000 00:13:42.200 [ 00:13:42.200 { 00:13:42.200 "aliases": [ 00:13:42.200 "lvs/lvol" 00:13:42.200 ], 00:13:42.200 "assigned_rate_limits": { 00:13:42.200 "r_mbytes_per_sec": 0, 00:13:42.200 "rw_ios_per_sec": 0, 00:13:42.200 "rw_mbytes_per_sec": 0, 00:13:42.200 "w_mbytes_per_sec": 0 00:13:42.200 }, 00:13:42.200 "block_size": 4096, 00:13:42.200 "claimed": false, 00:13:42.200 "driver_specific": { 00:13:42.200 "lvol": { 00:13:42.200 "base_bdev": "aio_bdev", 00:13:42.200 "clone": false, 00:13:42.200 "esnap_clone": false, 00:13:42.200 "lvol_store_uuid": "cbc7e197-400d-448f-a52f-7330f4d1d8b7", 00:13:42.200 "num_allocated_clusters": 38, 00:13:42.200 "snapshot": false, 00:13:42.200 "thin_provision": false 00:13:42.200 } 00:13:42.200 }, 00:13:42.200 "name": "aa71ba15-e78c-4832-99b5-6abdf6001312", 00:13:42.200 "num_blocks": 38912, 00:13:42.200 "product_name": "Logical Volume", 00:13:42.200 "supported_io_types": { 00:13:42.200 "abort": false, 00:13:42.200 "compare": false, 00:13:42.200 "compare_and_write": false, 00:13:42.200 "copy": false, 00:13:42.200 "flush": false, 00:13:42.200 "get_zone_info": false, 00:13:42.200 "nvme_admin": false, 00:13:42.200 "nvme_io": false, 00:13:42.200 "nvme_io_md": false, 00:13:42.200 "nvme_iov_md": false, 00:13:42.200 "read": true, 00:13:42.200 "reset": true, 00:13:42.200 "seek_data": true, 00:13:42.200 "seek_hole": true, 00:13:42.200 "unmap": true, 00:13:42.200 "write": true, 00:13:42.200 "write_zeroes": true, 00:13:42.200 "zcopy": false, 00:13:42.200 "zone_append": false, 00:13:42.200 "zone_management": false 00:13:42.200 }, 00:13:42.200 "uuid": "aa71ba15-e78c-4832-99b5-6abdf6001312", 00:13:42.200 "zoned": false 00:13:42.200 } 00:13:42.200 ] 00:13:42.200 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:42.200 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:42.200 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:42.458 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:42.458 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:42.458 18:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:42.715 18:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:42.715 18:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aa71ba15-e78c-4832-99b5-6abdf6001312 00:13:42.972 18:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 00:13:43.230 18:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:43.488 18:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:43.746 ************************************ 00:13:43.746 END TEST lvs_grow_dirty 00:13:43.746 ************************************ 00:13:43.746 00:13:43.746 real 0m21.114s 00:13:43.746 user 0m42.949s 00:13:43.746 sys 0m8.253s 00:13:43.746 18:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.746 18:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:44.028 nvmf_trace.0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.028 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.286 rmmod nvme_tcp 00:13:44.286 rmmod nvme_fabrics 00:13:44.286 rmmod nvme_keyring 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74749 ']' 00:13:44.286 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74749 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74749 
']' 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74749 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74749 00:13:44.287 killing process with pid 74749 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74749' 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74749 00:13:44.287 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74749 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:44.544 00:13:44.544 real 0m42.484s 00:13:44.544 user 1m6.588s 00:13:44.544 sys 0m12.006s 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.544 18:39:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.544 ************************************ 00:13:44.544 END TEST nvmf_lvs_grow 00:13:44.544 ************************************ 00:13:44.544 18:39:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:44.544 18:39:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:44.544 18:39:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.544 18:39:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.544 18:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.544 ************************************ 00:13:44.544 START TEST nvmf_bdev_io_wait 00:13:44.544 ************************************ 00:13:44.544 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:44.804 * Looking for test storage... 
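For reference, the lvs_grow_dirty case that just finished exercises dirty lvol-store recovery: the AIO base bdev is removed out from under the lvol store, then re-created over the same backing file so blobstore recovery can replay the metadata, and the cluster counters are re-checked. A rough sketch of that sequence using the same RPCs seen above (the backing-file path and UUIDs are the ones from this run and shown purely for illustration; rpc.py path abbreviated):
  # re-create the AIO bdev over the original file; examine triggers blobstore recovery
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the lvol bdev should reappear with its pre-removal geometry
  scripts/rpc.py bdev_get_bdevs -b aa71ba15-e78c-4832-99b5-6abdf6001312 -t 2000
  # and the store should still report 61 free of 99 data clusters, as checked above
  scripts/rpc.py bdev_lvol_get_lvstores -u cbc7e197-400d-448f-a52f-7330f4d1d8b7 | jq -r '.[0].free_clusters'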
00:13:44.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:44.804 Cannot find device "nvmf_tgt_br" 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.804 Cannot find device "nvmf_tgt_br2" 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:44.804 Cannot find device "nvmf_tgt_br" 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:44.804 Cannot find device "nvmf_tgt_br2" 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
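The deletes above only clear leftovers from a previous run (hence the expected "Cannot find device" messages). The commands that follow rebuild the virtual test network; roughly, nvmf_veth_init wires a target namespace to the host over a bridge like this (interface names and addresses as used in this run; the second nvmf_tgt_if2/10.0.0.3 pair and the link-up steps are omitted for brevity):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT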
00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.804 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:45.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:13:45.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:13:45.063 00:13:45.063 --- 10.0.0.2 ping statistics --- 00:13:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.063 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:45.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:13:45.063 00:13:45.063 --- 10.0.0.3 ping statistics --- 00:13:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.063 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:45.063 00:13:45.063 --- 10.0.0.1 ping statistics --- 00:13:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.063 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=75166 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 75166 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 75166 ']' 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
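At this point nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers; because of --wait-for-rpc the application stays in a pre-init state so the test can tune the bdev layer before subsystem init (the framework_start_init call appears a little further down). Approximately, with the flags from this run:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # waitforlisten blocks until /var/tmp/spdk.sock accepts RPC requests, then:
  scripts/rpc.py bdev_set_options -p 5 -c 1
  scripts/rpc.py framework_start_init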
00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.063 18:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:45.323 [2024-07-15 18:39:19.578758] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:45.323 [2024-07-15 18:39:19.578866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.323 [2024-07-15 18:39:19.731586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.582 [2024-07-15 18:39:19.858875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.582 [2024-07-15 18:39:19.858960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.582 [2024-07-15 18:39:19.858976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.582 [2024-07-15 18:39:19.858989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.582 [2024-07-15 18:39:19.859001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.582 [2024-07-15 18:39:19.859857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.582 [2024-07-15 18:39:19.860022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.582 [2024-07-15 18:39:19.860077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.582 [2024-07-15 18:39:19.860087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.149 18:39:20 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.149 [2024-07-15 18:39:20.619928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.149 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.407 Malloc0 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.407 [2024-07-15 18:39:20.683957] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75219 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.407 { 00:13:46.407 "params": { 00:13:46.407 "name": "Nvme$subsystem", 00:13:46.407 "trtype": "$TEST_TRANSPORT", 00:13:46.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.407 "adrfam": "ipv4", 00:13:46.407 "trsvcid": "$NVMF_PORT", 00:13:46.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.407 "hdgst": ${hdgst:-false}, 00:13:46.407 "ddgst": ${ddgst:-false} 00:13:46.407 }, 00:13:46.407 "method": "bdev_nvme_attach_controller" 00:13:46.407 } 
00:13:46.407 EOF 00:13:46.407 )") 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=75221 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.407 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.407 { 00:13:46.407 "params": { 00:13:46.407 "name": "Nvme$subsystem", 00:13:46.407 "trtype": "$TEST_TRANSPORT", 00:13:46.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.407 "adrfam": "ipv4", 00:13:46.407 "trsvcid": "$NVMF_PORT", 00:13:46.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.408 "hdgst": ${hdgst:-false}, 00:13:46.408 "ddgst": ${ddgst:-false} 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 } 00:13:46.408 EOF 00:13:46.408 )") 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75225 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.408 { 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme$subsystem", 00:13:46.408 "trtype": "$TEST_TRANSPORT", 00:13:46.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "$NVMF_PORT", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.408 "hdgst": ${hdgst:-false}, 00:13:46.408 "ddgst": ${ddgst:-false} 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 } 00:13:46.408 EOF 00:13:46.408 )") 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75228 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
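For reference, the target-side objects these bdevperf jobs connect to were provisioned just above via rpc_cmd; the equivalent rpc.py sequence, with the names used in this run, is roughly:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420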
00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.408 { 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme$subsystem", 00:13:46.408 "trtype": "$TEST_TRANSPORT", 00:13:46.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "$NVMF_PORT", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.408 "hdgst": ${hdgst:-false}, 00:13:46.408 "ddgst": ${ddgst:-false} 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 } 00:13:46.408 EOF 00:13:46.408 )") 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme1", 00:13:46.408 "trtype": "tcp", 00:13:46.408 "traddr": "10.0.0.2", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "4420", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.408 "hdgst": false, 00:13:46.408 "ddgst": false 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 }' 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme1", 00:13:46.408 "trtype": "tcp", 00:13:46.408 "traddr": "10.0.0.2", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "4420", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.408 "hdgst": false, 00:13:46.408 "ddgst": false 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 }' 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme1", 00:13:46.408 "trtype": "tcp", 00:13:46.408 "traddr": "10.0.0.2", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "4420", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.408 "hdgst": false, 00:13:46.408 "ddgst": false 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 }' 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.408 "params": { 00:13:46.408 "name": "Nvme1", 00:13:46.408 "trtype": "tcp", 00:13:46.408 "traddr": "10.0.0.2", 00:13:46.408 "adrfam": "ipv4", 00:13:46.408 "trsvcid": "4420", 00:13:46.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.408 "hdgst": false, 00:13:46.408 "ddgst": false 00:13:46.408 }, 00:13:46.408 "method": "bdev_nvme_attach_controller" 00:13:46.408 }' 00:13:46.408 [2024-07-15 18:39:20.742842] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:46.408 [2024-07-15 18:39:20.742927] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:46.408 [2024-07-15 18:39:20.757253] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:46.408 [2024-07-15 18:39:20.757332] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:46.408 [2024-07-15 18:39:20.763620] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:46.408 [2024-07-15 18:39:20.763713] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:46.408 18:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 75219 00:13:46.408 [2024-07-15 18:39:20.782894] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:46.408 [2024-07-15 18:39:20.783012] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:46.666 [2024-07-15 18:39:20.987407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.666 [2024-07-15 18:39:21.036525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.666 [2024-07-15 18:39:21.100459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:46.666 [2024-07-15 18:39:21.108816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.924 [2024-07-15 18:39:21.149403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:46.924 [2024-07-15 18:39:21.191418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.924 [2024-07-15 18:39:21.207895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:46.924 Running I/O for 1 seconds... 00:13:46.924 [2024-07-15 18:39:21.276469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.924 Running I/O for 1 seconds... 00:13:46.924 Running I/O for 1 seconds... 00:13:47.182 Running I/O for 1 seconds... 
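Each "Running I/O for 1 seconds..." line above belongs to one of the four bdevperf instances started earlier, one workload per core mask, all fed the same bdev_nvme_attach_controller JSON (printed above) on /dev/fd/63 — roughly:
  build/examples/bdevperf -m 0x10 -i 1 --json <target.json> -q 128 -o 4096 -w write -t 1 -s 256
  build/examples/bdevperf -m 0x20 -i 2 --json <target.json> -q 128 -o 4096 -w read -t 1 -s 256
  build/examples/bdevperf -m 0x40 -i 3 --json <target.json> -q 128 -o 4096 -w flush -t 1 -s 256
  build/examples/bdevperf -m 0x80 -i 4 --json <target.json> -q 128 -o 4096 -w unmap -t 1 -s 256
where <target.json> stands for that generated config. The latency tables that follow report per-workload IOPS, throughput and latency for Nvme1n1 over the one-second run.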
00:13:48.115 
00:13:48.115 Latency(us) 
00:13:48.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:48.115 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 
00:13:48.115 Nvme1n1 : 1.00 199528.35 779.41 0.00 0.00 639.12 271.12 1100.07 
00:13:48.115 =================================================================================================================== 
00:13:48.115 Total : 199528.35 779.41 0.00 0.00 639.12 271.12 1100.07 
00:13:48.115 
00:13:48.115 Latency(us) 
00:13:48.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:48.115 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 
00:13:48.115 Nvme1n1 : 1.01 9409.14 36.75 0.00 0.00 13551.77 7146.54 20097.71 
00:13:48.115 =================================================================================================================== 
00:13:48.115 Total : 9409.14 36.75 0.00 0.00 13551.77 7146.54 20097.71 
00:13:48.115 
00:13:48.115 Latency(us) 
00:13:48.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:48.115 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 
00:13:48.115 Nvme1n1 : 1.02 4199.74 16.41 0.00 0.00 30197.85 11546.82 36200.84 
00:13:48.115 =================================================================================================================== 
00:13:48.115 Total : 4199.74 16.41 0.00 0.00 30197.85 11546.82 36200.84 
00:13:48.115 
00:13:48.115 Latency(us) 
00:13:48.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:48.115 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 
00:13:48.115 Nvme1n1 : 1.01 6327.47 24.72 0.00 0.00 20146.64 6834.47 32955.25 
00:13:48.115 =================================================================================================================== 
00:13:48.115 Total : 6327.47 24.72 0.00 0.00 20146.64 6834.47 32955.25 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 75221 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 75225 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 75228 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:13:48.373 rmmod nvme_tcp 
00:13:48.373 rmmod nvme_fabrics 
00:13:48.373 rmmod nvme_keyring 
00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 75166 ']' 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 75166 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 75166 ']' 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 75166 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75166 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75166' 00:13:48.373 killing process with pid 75166 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 75166 00:13:48.373 18:39:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 75166 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.631 00:13:48.631 real 0m4.080s 00:13:48.631 user 0m17.722s 00:13:48.631 sys 0m2.188s 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.631 18:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:48.631 ************************************ 00:13:48.631 END TEST nvmf_bdev_io_wait 00:13:48.631 ************************************ 00:13:48.897 18:39:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:48.897 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:48.897 18:39:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.897 18:39:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.897 18:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.897 ************************************ 00:13:48.897 START TEST nvmf_queue_depth 00:13:48.897 ************************************ 00:13:48.897 18:39:23 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:48.897 * Looking for test storage... 00:13:48.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.897 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.897 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:48.897 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.898 Cannot find device "nvmf_tgt_br" 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.898 Cannot find device "nvmf_tgt_br2" 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.898 Cannot find device "nvmf_tgt_br" 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.898 Cannot find device "nvmf_tgt_br2" 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:13:48.898 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:49.160 18:39:23 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:13:49.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:13:49.160 00:13:49.160 --- 10.0.0.2 ping statistics --- 00:13:49.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.160 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:49.160 00:13:49.160 --- 10.0.0.3 ping statistics --- 00:13:49.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.160 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:49.160 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:13:49.418 00:13:49.418 --- 10.0.0.1 ping statistics --- 00:13:49.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.418 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75458 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75458 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75458 ']' 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.418 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.419 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
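The nvmf_veth_init trace above builds the test network from scratch: a network namespace (nvmf_tgt_ns_spdk) that will host the target, veth pairs for the initiator and target sides, a bridge (nvmf_br) joining the host-side ends, an iptables rule letting NVMe/TCP traffic reach port 4420, and ping checks across 10.0.0.1, 10.0.0.2 and 10.0.0.3. A condensed sketch of the same topology follows; it reuses the device and address names from the trace, assumes root on a host where none of these interfaces exist yet, and leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3), which follows the same pattern.

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_veth_init sets up (illustration only, not the harness itself).
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk                               # namespace the target runs in
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                             # bridge the host-side veth ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target reachability check

The "Cannot find device" / "Cannot open network namespace" messages earlier in the trace appear to be expected on a fresh machine: the teardown half of the helper runs first and simply finds nothing to delete.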
00:13:49.419 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.419 18:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.419 [2024-07-15 18:39:23.725392] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:49.419 [2024-07-15 18:39:23.725472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.419 [2024-07-15 18:39:23.864865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.676 [2024-07-15 18:39:23.967593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.676 [2024-07-15 18:39:23.967640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.676 [2024-07-15 18:39:23.967651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.676 [2024-07-15 18:39:23.967660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.676 [2024-07-15 18:39:23.967668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.676 [2024-07-15 18:39:23.967702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.243 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.243 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:50.243 18:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.243 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.243 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 [2024-07-15 18:39:24.773397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 Malloc0 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
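At this point in the queue_depth test the target (pid 75458) is up inside the namespace and the harness is configuring it over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; a namespace and a listener on 10.0.0.2:4420 follow in the next few entries. The rpc_cmd helper seen in the trace drives scripts/rpc.py under the hood, so an equivalent manual bring-up looks roughly like the sketch below (socket handling is elided and the default /var/tmp/spdk.sock is an assumption; the transport flags are copied verbatim from the trace rather than interpreted).

    # Rough manual equivalent of the rpc_cmd sequence, run from the SPDK repository root.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only the target process itself needs ip netns exec; the RPC socket is a Unix-domain socket on the shared filesystem, which is why the harness can configure a target living inside the network namespace without entering it.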
00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 [2024-07-15 18:39:24.844689] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75508 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75508 /var/tmp/bdevperf.sock 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75508 ']' 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.500 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.500 [2024-07-15 18:39:24.906209] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
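The queue-depth measurement itself is driven by the bdevperf example application launched just above: -q 1024 and -o 4096 match the "depth: 1024, IO size: 4096" banner in its output further down, -w verify selects the verify workload, -t 10 bounds the run to ten seconds, and -z (as used here) keeps bdevperf idle until it is told to start over its private RPC socket, /var/tmp/bdevperf.sock. The two follow-up steps visible in the next entries, attaching the remote namespace as an NVMe bdev and kicking off the run, reduce to the following sketch, with the socket path and controller name taken from the trace:

    # Attach the target's namespace through bdevperf's private RPC socket, then start the timed run.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The resulting bdev shows up as NVMe0n1, which is the device name reported in the latency table at the end of the run.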
00:13:50.500 [2024-07-15 18:39:24.906313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75508 ] 00:13:50.757 [2024-07-15 18:39:25.055211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.757 [2024-07-15 18:39:25.173460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.689 18:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.689 18:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:51.689 18:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:51.689 18:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.689 18:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:51.689 NVMe0n1 00:13:51.689 18:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.689 18:39:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:51.689 Running I/O for 10 seconds... 00:14:03.879 00:14:03.879 Latency(us) 00:14:03.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.879 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:03.879 Verification LBA range: start 0x0 length 0x4000 00:14:03.879 NVMe0n1 : 10.05 9872.01 38.56 0.00 0.00 103347.21 16976.94 82388.11 00:14:03.879 =================================================================================================================== 00:14:03.879 Total : 9872.01 38.56 0.00 0.00 103347.21 16976.94 82388.11 00:14:03.879 0 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75508 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75508 ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75508 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75508 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75508' 00:14:03.879 killing process with pid 75508 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75508 00:14:03.879 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.879 00:14:03.879 Latency(us) 00:14:03.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.879 =================================================================================================================== 00:14:03.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.879 18:39:36 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75508 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.879 rmmod nvme_tcp 00:14:03.879 rmmod nvme_fabrics 00:14:03.879 rmmod nvme_keyring 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75458 ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75458 ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75458' 00:14:03.879 killing process with pid 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75458 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:03.879 00:14:03.879 real 0m13.706s 00:14:03.879 user 0m23.487s 00:14:03.879 sys 0m2.282s 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.879 ************************************ 00:14:03.879 END TEST nvmf_queue_depth 00:14:03.879 ************************************ 00:14:03.879 18:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.879 18:39:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:03.879 18:39:36 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:03.879 18:39:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.879 18:39:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.879 18:39:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.879 ************************************ 00:14:03.879 START TEST nvmf_target_multipath 00:14:03.879 ************************************ 00:14:03.879 18:39:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:03.879 * Looking for test storage... 00:14:03.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:03.879 18:39:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.879 18:39:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.879 18:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:03.880 18:39:37 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:03.880 Cannot find device "nvmf_tgt_br" 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.880 Cannot find device "nvmf_tgt_br2" 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:03.880 Cannot find device "nvmf_tgt_br" 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:03.880 Cannot find device "nvmf_tgt_br2" 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:03.880 
18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:03.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:14:03.880 00:14:03.880 --- 10.0.0.2 ping statistics --- 00:14:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.880 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:03.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:03.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:03.880 00:14:03.880 --- 10.0.0.3 ping statistics --- 00:14:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.880 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:03.880 00:14:03.880 --- 10.0.0.1 ping statistics --- 00:14:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.880 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.880 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75840 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75840 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75840 ']' 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.881 18:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:03.881 [2024-07-15 18:39:37.491504] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
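The multipath variant starts the target on four cores (-m 0xF, matching the "Total cores available: 4" notice just below) and then, as the following entries show, builds one subsystem that is reachable over both target addresses with ANA reporting enabled, connects to it from the host once per path, and later flips the ANA state of individual listeners while fio runs. Condensed into one sketch, with HOSTNQN/HOSTID standing in for the generated uuid values in the trace and -r taken to be rpc.py's ana-reporting switch:

    # One ANA-reporting subsystem, two TCP listeners, one host connection per path
    # (arguments copied from the multipath.sh calls in the trace).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G    # first path
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G    # second path

    cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state   # both start out "optimized"

    # Degrade one listener and keep the other usable; kernel native multipath is expected
    # to steer I/O toward 10.0.0.3 while the fio job keeps running against /dev/nvme0n1.
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

The per-path controller names (nvme0c0n1, nvme0c1n1) and the check_ana_state polling loop in the trace are how the test confirms that each state change landed before the fio result is read back.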
00:14:03.881 [2024-07-15 18:39:37.491598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.881 [2024-07-15 18:39:37.631118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.881 [2024-07-15 18:39:37.804293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.881 [2024-07-15 18:39:37.804384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.881 [2024-07-15 18:39:37.804399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.881 [2024-07-15 18:39:37.804413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.881 [2024-07-15 18:39:37.804424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.881 [2024-07-15 18:39:37.804661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.881 [2024-07-15 18:39:37.804892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.881 [2024-07-15 18:39:37.805824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.881 [2024-07-15 18:39:37.805832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.139 18:39:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.139 18:39:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:14:04.140 18:39:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.140 18:39:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.140 18:39:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:04.140 18:39:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.140 18:39:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.398 [2024-07-15 18:39:38.663745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.398 18:39:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:04.656 Malloc0 00:14:04.656 18:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:04.913 18:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:05.171 18:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.171 [2024-07-15 18:39:39.633637] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.429 18:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:14:05.429 [2024-07-15 18:39:39.850041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.429 18:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:05.687 18:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:05.945 18:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.945 18:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:14:05.945 18:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.945 18:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:05.945 18:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:07.845 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:07.846 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75983 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:08.104 18:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:08.104 [global] 00:14:08.104 thread=1 00:14:08.104 invalidate=1 00:14:08.104 rw=randrw 00:14:08.104 time_based=1 00:14:08.104 runtime=6 00:14:08.104 ioengine=libaio 00:14:08.104 direct=1 00:14:08.104 bs=4096 00:14:08.104 iodepth=128 00:14:08.104 norandommap=0 00:14:08.104 numjobs=1 00:14:08.104 00:14:08.104 verify_dump=1 00:14:08.104 verify_backlog=512 00:14:08.104 verify_state_save=0 00:14:08.104 do_verify=1 00:14:08.104 verify=crc32c-intel 00:14:08.104 [job0] 00:14:08.104 filename=/dev/nvme0n1 00:14:08.104 Could not set queue depth (nvme0n1) 00:14:08.104 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.104 fio-3.35 00:14:08.104 Starting 1 thread 00:14:09.037 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:09.295 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:09.551 18:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:10.484 18:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:10.484 18:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:10.484 18:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:10.484 18:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:11.049 18:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:11.983 18:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:11.983 18:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:11.983 18:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:11.983 18:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75983 00:14:14.511 00:14:14.511 job0: (groupid=0, jobs=1): err= 0: pid=76004: Mon Jul 15 18:39:48 2024 00:14:14.511 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(288MiB/6004msec) 00:14:14.511 slat (usec): min=4, max=17423, avg=47.05, stdev=229.05 00:14:14.511 clat (usec): min=965, max=48781, avg=7185.22, stdev=1829.17 00:14:14.511 lat (usec): min=1924, max=48819, avg=7232.28, stdev=1840.82 00:14:14.511 clat percentiles (usec): 00:14:14.511 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6390], 00:14:14.511 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:14:14.511 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9110], 00:14:14.511 | 99.00th=[10945], 99.50th=[11469], 99.90th=[37487], 99.95th=[47449], 00:14:14.511 | 99.99th=[47973] 00:14:14.511 bw ( KiB/s): min=15016, max=33480, per=53.06%, avg=26088.73, stdev=6181.14, samples=11 00:14:14.511 iops : min= 3754, max= 8370, avg=6522.18, stdev=1545.29, samples=11 00:14:14.511 write: IOPS=7249, BW=28.3MiB/s (29.7MB/s)(148MiB/5224msec); 0 zone resets 00:14:14.511 slat (usec): min=7, max=3116, avg=57.35, stdev=140.93 00:14:14.511 clat (usec): min=414, max=12581, avg=6099.23, stdev=948.68 00:14:14.511 lat (usec): min=479, max=12610, avg=6156.58, stdev=953.77 00:14:14.511 clat percentiles (usec): 00:14:14.511 | 1.00th=[ 3589], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5473], 00:14:14.511 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6259], 00:14:14.511 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7439], 00:14:14.511 | 99.00th=[ 9110], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11731], 00:14:14.511 | 99.99th=[11994] 00:14:14.511 bw ( KiB/s): min=15048, max=32768, per=89.93%, avg=26078.55, stdev=5874.26, samples=11 00:14:14.511 iops : min= 3762, max= 8192, avg=6519.64, stdev=1468.56, samples=11 00:14:14.511 lat (usec) : 500=0.01%, 1000=0.01% 00:14:14.511 lat (msec) : 2=0.01%, 4=1.12%, 10=97.03%, 20=1.67%, 50=0.15% 00:14:14.511 cpu : usr=5.36%, sys=22.71%, ctx=7193, majf=0, minf=108 00:14:14.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:14.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:14.511 issued rwts: total=73799,37870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:14.511 00:14:14.511 Run status group 0 (all jobs): 00:14:14.511 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=288MiB (302MB), run=6004-6004msec 00:14:14.511 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=148MiB (155MB), run=5224-5224msec 00:14:14.511 00:14:14.511 Disk stats (read/write): 00:14:14.511 nvme0n1: ios=72279/37870, 
merge=0/0, ticks=483239/215458, in_queue=698697, util=98.60% 00:14:14.511 18:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:14.511 18:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:14:14.770 18:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:16.142 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76139 00:14:16.143 18:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:16.143 [global] 00:14:16.143 thread=1 00:14:16.143 invalidate=1 00:14:16.143 rw=randrw 00:14:16.143 time_based=1 00:14:16.143 runtime=6 00:14:16.143 ioengine=libaio 00:14:16.143 direct=1 00:14:16.143 bs=4096 00:14:16.143 iodepth=128 00:14:16.143 norandommap=0 00:14:16.143 numjobs=1 00:14:16.143 00:14:16.143 verify_dump=1 00:14:16.143 verify_backlog=512 00:14:16.143 verify_state_save=0 00:14:16.143 do_verify=1 00:14:16.143 verify=crc32c-intel 00:14:16.143 [job0] 00:14:16.143 filename=/dev/nvme0n1 00:14:16.143 Could not set queue depth (nvme0n1) 00:14:16.143 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.143 fio-3.35 00:14:16.143 Starting 1 thread 00:14:17.075 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:17.075 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:17.332 18:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:18.320 18:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:18.320 18:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:18.320 18:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:18.320 18:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:18.893 18:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:20.264 18:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:20.264 18:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:20.264 18:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:20.264 18:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76139 00:14:22.210 00:14:22.210 job0: (groupid=0, jobs=1): err= 0: pid=76160: Mon Jul 15 18:39:56 2024 00:14:22.210 read: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(279MiB/6003msec) 00:14:22.210 slat (usec): min=5, max=6013, avg=41.19, stdev=194.75 00:14:22.210 clat (usec): min=276, max=50438, avg=7266.78, stdev=2405.95 00:14:22.210 lat (usec): min=290, max=50447, avg=7307.97, stdev=2411.93 00:14:22.210 clat percentiles (usec): 00:14:22.210 | 1.00th=[ 1614], 5.00th=[ 3916], 10.00th=[ 4948], 20.00th=[ 6128], 00:14:22.210 | 30.00th=[ 6587], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7504], 00:14:22.210 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[ 9110], 95.00th=[10814], 00:14:22.210 | 99.00th=[15270], 99.50th=[16581], 99.90th=[36963], 99.95th=[39060], 00:14:22.210 | 99.99th=[46400] 00:14:22.210 bw ( KiB/s): min= 9976, max=37224, per=55.83%, avg=26574.55, stdev=8043.65, samples=11 00:14:22.210 iops : min= 2494, max= 9306, avg=6643.64, stdev=2010.91, samples=11 00:14:22.210 write: IOPS=7473, BW=29.2MiB/s (30.6MB/s)(154MiB/5280msec); 0 zone resets 00:14:22.210 slat (usec): min=11, max=32340, avg=52.02, stdev=203.84 00:14:22.210 clat (usec): min=169, max=39733, avg=6156.43, stdev=2530.13 00:14:22.210 lat (usec): min=226, max=39758, avg=6208.45, stdev=2538.63 00:14:22.210 clat percentiles (usec): 00:14:22.210 | 1.00th=[ 1037], 5.00th=[ 2540], 10.00th=[ 3589], 20.00th=[ 4686], 00:14:22.210 | 30.00th=[ 5604], 40.00th=[ 5932], 50.00th=[ 6194], 60.00th=[ 6456], 00:14:22.210 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7635], 95.00th=[10552], 00:14:22.210 | 99.00th=[13698], 99.50th=[14615], 99.90th=[37487], 99.95th=[38536], 00:14:22.210 | 99.99th=[39584] 00:14:22.210 bw ( KiB/s): min=10144, max=38096, per=88.77%, avg=26538.18, stdev=7832.89, samples=11 00:14:22.210 iops : min= 2536, max= 9524, avg=6634.55, stdev=1958.22, samples=11 00:14:22.210 lat (usec) : 250=0.01%, 500=0.05%, 750=0.15%, 1000=0.31% 00:14:22.210 lat (msec) : 2=1.92%, 4=5.90%, 10=85.44%, 20=6.05%, 50=0.17% 00:14:22.210 lat (msec) : 100=0.01% 00:14:22.210 cpu : usr=4.92%, sys=23.56%, ctx=7903, majf=0, minf=181 00:14:22.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:22.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.210 issued rwts: total=71432,39460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.210 00:14:22.210 Run status group 0 (all jobs): 00:14:22.210 READ: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=279MiB (293MB), run=6003-6003msec 00:14:22.210 WRITE: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=154MiB (162MB), run=5280-5280msec 00:14:22.210 00:14:22.210 Disk stats (read/write): 00:14:22.210 nvme0n1: ios=69854/39371, merge=0/0, ticks=477914/225616, in_queue=703530, util=98.70% 00:14:22.210 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
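The per-path waits traced throughout the multipath run above all go through the same check_ana_state helper; reconstructed from the xtrace lines here (a sketch only, the actual target/multipath.sh may differ in detail), the polling loop is roughly:

    # check_ana_state <ctrl-path> <expected-state>, e.g. check_ana_state nvme0c0n1 inaccessible
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Re-read the sysfs ana_state file once a second until it exists and
        # reports the expected ANA state, giving up after ~20 retries.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }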
00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.469 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:22.728 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.728 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:22.728 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.728 18:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.728 rmmod nvme_tcp 00:14:22.728 rmmod nvme_fabrics 00:14:22.728 rmmod nvme_keyring 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75840 ']' 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75840 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75840 ']' 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75840 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75840 00:14:22.728 killing process with pid 75840 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:22.728 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75840' 00:14:22.729 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75840 00:14:22.729 18:39:57 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@972 -- # wait 75840 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:22.987 00:14:22.987 real 0m20.413s 00:14:22.987 user 1m18.885s 00:14:22.987 sys 0m6.965s 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.987 18:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:22.987 ************************************ 00:14:22.987 END TEST nvmf_target_multipath 00:14:22.987 ************************************ 00:14:22.987 18:39:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:22.987 18:39:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:22.987 18:39:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:22.987 18:39:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.987 18:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.987 ************************************ 00:14:22.987 START TEST nvmf_zcopy 00:14:22.987 ************************************ 00:14:22.987 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:22.987 * Looking for test storage... 
00:14:23.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.246 18:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:23.247 Cannot find device "nvmf_tgt_br" 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.247 Cannot find device "nvmf_tgt_br2" 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:23.247 Cannot find device "nvmf_tgt_br" 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:23.247 Cannot find device "nvmf_tgt_br2" 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.247 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.505 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:23.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:23.506 00:14:23.506 --- 10.0.0.2 ping statistics --- 00:14:23.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.506 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:23.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:23.506 00:14:23.506 --- 10.0.0.3 ping statistics --- 00:14:23.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.506 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:23.506 00:14:23.506 --- 10.0.0.1 ping statistics --- 00:14:23.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.506 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76440 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76440 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76440 ']' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.506 18:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:23.764 [2024-07-15 18:39:58.025092] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:23.764 [2024-07-15 18:39:58.025208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.764 [2024-07-15 18:39:58.168789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.022 [2024-07-15 18:39:58.285098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.022 [2024-07-15 18:39:58.285157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:24.022 [2024-07-15 18:39:58.285172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.022 [2024-07-15 18:39:58.285185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.022 [2024-07-15 18:39:58.285196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.022 [2024-07-15 18:39:58.285230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.588 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 [2024-07-15 18:39:59.078502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 [2024-07-15 18:39:59.094597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 malloc0 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 
18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:24.847 { 00:14:24.847 "params": { 00:14:24.847 "name": "Nvme$subsystem", 00:14:24.847 "trtype": "$TEST_TRANSPORT", 00:14:24.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.847 "adrfam": "ipv4", 00:14:24.847 "trsvcid": "$NVMF_PORT", 00:14:24.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:24.847 "hdgst": ${hdgst:-false}, 00:14:24.847 "ddgst": ${ddgst:-false} 00:14:24.847 }, 00:14:24.847 "method": "bdev_nvme_attach_controller" 00:14:24.847 } 00:14:24.847 EOF 00:14:24.847 )") 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:24.847 18:39:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:24.847 "params": { 00:14:24.847 "name": "Nvme1", 00:14:24.847 "trtype": "tcp", 00:14:24.847 "traddr": "10.0.0.2", 00:14:24.847 "adrfam": "ipv4", 00:14:24.847 "trsvcid": "4420", 00:14:24.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.847 "hdgst": false, 00:14:24.847 "ddgst": false 00:14:24.847 }, 00:14:24.847 "method": "bdev_nvme_attach_controller" 00:14:24.847 }' 00:14:24.847 [2024-07-15 18:39:59.196102] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:24.847 [2024-07-15 18:39:59.196225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:14:25.106 [2024-07-15 18:39:59.345690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.106 [2024-07-15 18:39:59.465419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.364 Running I/O for 10 seconds... 
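For context on what bdevperf is given here: the "params"/"method" fragment printed by the trace above is one entry in a standard SPDK JSON config, which the gen_nvmf_target_json helper pipes to bdevperf on /dev/fd/62. A stand-alone equivalent is sketched below; the outer "subsystems"/"bdev" wrapper and the /tmp/bdevperf.json path are assumptions based on the usual SPDK config layout, not a verbatim dump from this run.

    # Hypothetical stand-alone version of the config handed to bdevperf above.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same bdevperf flags as in the trace: 10 s verify workload, qd 128, 8 KiB I/O.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192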
00:14:35.387 00:14:35.387 Latency(us) 00:14:35.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.387 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:35.387 Verification LBA range: start 0x0 length 0x1000 00:14:35.387 Nvme1n1 : 10.01 7606.96 59.43 0.00 0.00 16777.12 491.52 23842.62 00:14:35.387 =================================================================================================================== 00:14:35.387 Total : 7606.96 59.43 0.00 0.00 16777.12 491.52 23842.62 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76608 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:35.387 { 00:14:35.387 "params": { 00:14:35.387 "name": "Nvme$subsystem", 00:14:35.387 "trtype": "$TEST_TRANSPORT", 00:14:35.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:35.387 "adrfam": "ipv4", 00:14:35.387 "trsvcid": "$NVMF_PORT", 00:14:35.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:35.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:35.387 "hdgst": ${hdgst:-false}, 00:14:35.387 "ddgst": ${ddgst:-false} 00:14:35.387 }, 00:14:35.387 "method": "bdev_nvme_attach_controller" 00:14:35.387 } 00:14:35.387 EOF 00:14:35.387 )") 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:35.387 [2024-07-15 18:40:09.859821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.387 [2024-07-15 18:40:09.859862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
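The long run of JSON-RPC failures around this point in the log (Code=-32602, "Requested NSID 1 already in use", "Unable to add namespace") comes from repeatedly re-adding malloc0 as namespace 1 of cnode1 while that NSID is still attached. Each attempt has roughly the shape below; the -n/--nsid spelling is assumed from the stock rpc.py options rather than taken from this trace.

    # Re-adding the same bdev under an NSID that is already in use is rejected
    # by the target with JSON-RPC error -32602 (Invalid parameters).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1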
00:14:35.387 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:35.387 18:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:35.387 "params": { 00:14:35.387 "name": "Nvme1", 00:14:35.387 "trtype": "tcp", 00:14:35.387 "traddr": "10.0.0.2", 00:14:35.387 "adrfam": "ipv4", 00:14:35.387 "trsvcid": "4420", 00:14:35.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.388 "hdgst": false, 00:14:35.388 "ddgst": false 00:14:35.388 }, 00:14:35.388 "method": "bdev_nvme_attach_controller" 00:14:35.388 }' 00:14:35.648 [2024-07-15 18:40:09.871796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.871826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.883780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.883805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.895783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.895808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.907786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.907812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.915123] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:14:35.648 [2024-07-15 18:40:09.915215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76608 ] 00:14:35.648 [2024-07-15 18:40:09.919801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.919977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.931807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.931835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.943810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.943839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.955806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.955832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.967813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.967841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:09.979811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.979837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:35.648 [2024-07-15 18:40:09.991816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:09.991842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.003825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.003853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.015825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.015853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.027827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.027855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.039827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.039853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.051832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.051857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.062193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.648 [2024-07-15 18:40:10.063836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.648 [2024-07-15 18:40:10.063864] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.648 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.648 [2024-07-15 18:40:10.079859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.649 [2024-07-15 18:40:10.079898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.649 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.649 [2024-07-15 18:40:10.091869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.649 [2024-07-15 18:40:10.091899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.649 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.649 [2024-07-15 18:40:10.103849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.649 [2024-07-15 18:40:10.103876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.649 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.649 [2024-07-15 18:40:10.115848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.649 [2024-07-15 18:40:10.115874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.649 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.649 [2024-07-15 18:40:10.127845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.649 [2024-07-15 18:40:10.127871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.908 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.908 [2024-07-15 18:40:10.139843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.908 [2024-07-15 18:40:10.139867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.908 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.908 [2024-07-15 18:40:10.151848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.908 [2024-07-15 18:40:10.151874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.908 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.163851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.163878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.175869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.175895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.187851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.187875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.199855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.199880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.211875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.211903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.223870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.223898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.231523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.909 [2024-07-15 18:40:10.235877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.235906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.247880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.247909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.259881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.259910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.271885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.271914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.283891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.283921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.295893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.295921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.307891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.307919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.319897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.319925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.331898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.331926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.343893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.343918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.355901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.355928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.367906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.367933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.909 [2024-07-15 18:40:10.379919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.909 [2024-07-15 18:40:10.379957] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.909 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.391956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.391987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.403948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.403986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.415929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.415991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.427948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.427988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.439951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.439993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.451956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.451987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 Running I/O for 5 seconds... 00:14:36.167 [2024-07-15 18:40:10.463937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.463972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.479603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.479638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.494858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.494897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.512183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.512220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.528338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.528373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.545484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.545521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.562260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.562297] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.578756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.578794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.594893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.594929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.167 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.167 [2024-07-15 18:40:10.610566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.167 [2024-07-15 18:40:10.610603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.168 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.168 [2024-07-15 18:40:10.629050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.168 [2024-07-15 18:40:10.629079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.168 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.168 [2024-07-15 18:40:10.644392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.168 [2024-07-15 18:40:10.644426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.168 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.660728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.660762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.671567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.671600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.687494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.687527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.704653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.704688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.720784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.720817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.737461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.737495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.753444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.753476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.767581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.767614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.782740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.782768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.799006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.799032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.815976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.816005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.833483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.833522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.848664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.848700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.859584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.859618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:36.428 [2024-07-15 18:40:10.875795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.875828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.890474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.890507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.428 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.428 [2024-07-15 18:40:10.907363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.428 [2024-07-15 18:40:10.907401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:10.924374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:10.924415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:10.942440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:10.942479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:10.958417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:10.958456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:10.969869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:10.969917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:10.987353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:10.987389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.002474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.002509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.013251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.013283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.029203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.029234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.045820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.045866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.062826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.062861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.080092] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.080131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.096070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.096110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.113241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.113289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.687 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.687 [2024-07-15 18:40:11.127589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.687 [2024-07-15 18:40:11.127633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.688 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.688 [2024-07-15 18:40:11.144868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.688 [2024-07-15 18:40:11.144914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.688 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.688 [2024-07-15 18:40:11.161039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.688 [2024-07-15 18:40:11.161082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.688 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.182446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.182501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.199787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.199836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.216148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.216193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.233025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.233070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.250592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.250638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.266846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.266889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.283793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.283830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.300052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.300092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.317713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.317755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.331928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.331991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.348568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.348613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.364928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.364984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.381996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.382039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.399112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.399149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:36.946 [2024-07-15 18:40:11.415835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.946 [2024-07-15 18:40:11.415880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.946 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.431801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.204 [2024-07-15 18:40:11.431845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.449026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.204 [2024-07-15 18:40:11.449072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.466080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.204 [2024-07-15 18:40:11.466125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.482021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.204 [2024-07-15 18:40:11.482063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.497770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.204 [2024-07-15 18:40:11.497809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.204 [2024-07-15 18:40:11.513476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:37.204 [2024-07-15 18:40:11.513513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:37.204 2024/07/15 18:40:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:14:39.032 [2024-07-15 18:40:13.458919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:39.032 [2024-07-15 18:40:13.458969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:39.032 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.032 [2024-07-15 18:40:13.476081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.032 [2024-07-15 18:40:13.476122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.032 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.032 [2024-07-15 18:40:13.492842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.032 [2024-07-15 18:40:13.492886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.032 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.032 [2024-07-15 18:40:13.508711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.032 [2024-07-15 18:40:13.508755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.032 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.290 [2024-07-15 18:40:13.525458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.290 [2024-07-15 18:40:13.525501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.290 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.542992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.543043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.559175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.559213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.575623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.575665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.592880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.592924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.608282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.608322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.620029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.620068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.636749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.636786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.653186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.653225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.670236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.670280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.687003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.687044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.704323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.704361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.721210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.721253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.736762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.736801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.749331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.749370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.291 [2024-07-15 18:40:13.760642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.291 [2024-07-15 18:40:13.760682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.291 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.777785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:39.550 [2024-07-15 18:40:13.777831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.794608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.794648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.810887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.810937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.826842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.826878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.842873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.842909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.859520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.859557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.876157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.876197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.892593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.892637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.909385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.909434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.926207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.926256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.941781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.941829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.959768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.959815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.975388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.975439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:13.993504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:13.993556] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:14.010248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:14.010305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.550 [2024-07-15 18:40:14.026770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.550 [2024-07-15 18:40:14.026823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.550 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.043103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.043156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.060065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.060115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.076599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.076647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.093206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.093254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.110320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.110371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.127148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.127200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.144272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.144325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.159443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.159508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.175827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.175876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.191633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.191682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.208812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.208861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.225347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.225392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.241921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.241988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.258545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.258593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.275701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.275748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.809 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:39.809 [2024-07-15 18:40:14.291277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.809 [2024-07-15 18:40:14.291322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.302500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.302541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:40.068 [2024-07-15 18:40:14.319445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.319487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.334849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.334898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.350821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.350874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.367529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.367582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.383766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.383821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.400022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.400073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.418301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.418352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.433040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.433090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.449780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.449837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.465637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.465690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.477858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.477928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.493993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.494041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.511419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.511472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.527018] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.527069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.068 [2024-07-15 18:40:14.539289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.068 [2024-07-15 18:40:14.539346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.068 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.555885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.555935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.572383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.572429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.589440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.589491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.605775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.605820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.622978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.623019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.639640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.639682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.655831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.655871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.672893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.672936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.689749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.689797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.705892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.705965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.723701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.723759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.738924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.738992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.754894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.754962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.771753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.771804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.788066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.788117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.327 [2024-07-15 18:40:14.799839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.327 [2024-07-15 18:40:14.799885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.327 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.816323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.816377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.832187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.832235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.848512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.848561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.859553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.859598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.875906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.875962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.891101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.891145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.906581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.906628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.923185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.923237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.940290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
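The run of failures above is one JSON-RPC request pattern being rejected over and over: nvmf_subsystem_add_ns is invoked on nqn.2016-06.io.spdk:cnode1 with bdev malloc0 as NSID 1 while NSID 1 is already attached, so spdk_nvmf_subsystem_add_ns_ext refuses it and the RPC layer answers Code=-32602 (Invalid parameters). A minimal sketch of one such request follows; it assumes a local SPDK target with its JSON-RPC socket at the default /var/tmp/spdk.sock and the namespace already present (the socket path and the raw-socket client are illustrative assumptions, not necessarily how the test harness drives the call).

    # Sketch only: resend the nvmf_subsystem_add_ns request seen in the log above.
    # Assumptions: local SPDK target, RPC socket at /var/tmp/spdk.sock, and subsystem
    # nqn.2016-06.io.spdk:cnode1 already exposing bdev malloc0 as NSID 1.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(65536).decode())

    # A duplicate NSID comes back as an error object instead of a result,
    # matching the log: {'code': -32602, 'message': 'Invalid parameters'}.
    print(reply.get("error"))

Every retry recorded in this stretch of the log follows that same shape; only the timestamps advance between attempts.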
00:14:40.587 [2024-07-15 18:40:14.940349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.957288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.957339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.973580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.973630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:14.990266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:14.990317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:15.007401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:15.007451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:15.023617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:15.023664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:15.039524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:15.039570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.587 [2024-07-15 18:40:15.053743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.587 [2024-07-15 18:40:15.053790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.587 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.069930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.070005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.084775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.084825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.100724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.100786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.117647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.117711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.134375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.134438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.150389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.150448] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.167568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.167626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.184215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.184277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.201120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.201183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.218825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.218896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.234672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.234734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.251814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.251875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.846 [2024-07-15 18:40:15.269109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.846 [2024-07-15 18:40:15.269167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.846 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.847 [2024-07-15 18:40:15.285910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.847 [2024-07-15 18:40:15.285995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.847 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.847 [2024-07-15 18:40:15.303078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.847 [2024-07-15 18:40:15.303145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.847 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:40.847 [2024-07-15 18:40:15.319664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:40.847 [2024-07-15 18:40:15.319718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:40.847 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.336325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.336378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.353373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.353436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.369323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.369383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.386053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.386106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.402149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.402205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.413727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.413773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.106 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.106 [2024-07-15 18:40:15.429756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.106 [2024-07-15 18:40:15.429809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.446068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.446114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.457910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.457990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:41.107 [2024-07-15 18:40:15.468979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.469026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 00:14:41.107 Latency(us) 00:14:41.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.107 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:41.107 Nvme1n1 : 5.01 13676.50 106.85 0.00 0.00 9348.85 3666.90 17975.59 00:14:41.107 =================================================================================================================== 00:14:41.107 Total : 13676.50 106.85 0.00 0.00 9348.85 3666.90 17975.59 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.480957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.481004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.492967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.493012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.504972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.505016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.516981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.517021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.524935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.524986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.536981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.537024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.548966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.549007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.560998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.561043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.573000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.573043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.107 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.107 [2024-07-15 18:40:15.584985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.107 [2024-07-15 18:40:15.585026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.596983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.597022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.608996] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.609035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.624979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.625020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.636989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.637029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.649002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.649042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.661004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.661041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 [2024-07-15 18:40:15.672996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.366 [2024-07-15 18:40:15.673032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.366 2024/07/15 18:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:41.366 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76608) - No such process 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76608 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.366 18:40:15 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 delay0 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.366 18:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:41.624 [2024-07-15 18:40:15.872337] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:48.185 Initializing NVMe Controllers 00:14:48.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:48.185 Initialization complete. Launching workers. 
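Note on the trace above: the long run of Code=-32602 failures is expected. A background job in zcopy.sh (job 76608, which line 42 tries to kill once it has already exited) keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached, so every attempt is rejected until the job is cleaned up around target/zcopy.sh@49. The final step then swaps the namespace for a delay bdev and drives it with the abort example, which is what produces the controller and worker messages above. A rough sketch of that traced sequence, assuming rpc_cmd resolves to scripts/rpc.py on the target's default RPC socket and paths are relative to the SPDK repo root (both assumptions of this sketch, not shown in the log):

    # drop the malloc-backed namespace and re-add it backed by a delay bdev
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # the 1000000-microsecond delays keep I/O in flight long enough for aborts to land
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'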
00:14:48.185 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 153 00:14:48.185 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 440, failed to submit 33 00:14:48.185 success 254, unsuccess 186, failed 0 00:14:48.185 18:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:48.185 18:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.186 18:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.186 rmmod nvme_tcp 00:14:48.186 rmmod nvme_fabrics 00:14:48.186 rmmod nvme_keyring 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76440 ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76440 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76440 ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76440 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76440 00:14:48.186 killing process with pid 76440 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76440' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76440 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76440 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:48.186 ************************************ 00:14:48.186 END TEST nvmf_zcopy 00:14:48.186 ************************************ 00:14:48.186 00:14:48.186 real 
0m24.944s 00:14:48.186 user 0m39.800s 00:14:48.186 sys 0m7.669s 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.186 18:40:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.186 18:40:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:48.186 18:40:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:48.186 18:40:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.186 18:40:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.186 18:40:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.186 ************************************ 00:14:48.186 START TEST nvmf_nmic 00:14:48.186 ************************************ 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:48.186 * Looking for test storage... 00:14:48.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:48.186 Cannot find device "nvmf_tgt_br" 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.186 Cannot find device "nvmf_tgt_br2" 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:48.186 Cannot find device "nvmf_tgt_br" 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:48.186 Cannot find device "nvmf_tgt_br2" 00:14:48.186 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.187 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:48.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:48.446 00:14:48.446 --- 10.0.0.2 ping statistics --- 00:14:48.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.446 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:48.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:48.446 00:14:48.446 --- 10.0.0.3 ping statistics --- 00:14:48.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.446 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:48.446 00:14:48.446 --- 10.0.0.1 ping statistics --- 00:14:48.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.446 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76936 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76936 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76936 ']' 00:14:48.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.446 18:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.705 [2024-07-15 18:40:22.982473] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:14:48.705 [2024-07-15 18:40:22.982573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.705 [2024-07-15 18:40:23.128711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.962 [2024-07-15 18:40:23.247257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.962 [2024-07-15 18:40:23.247546] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.962 [2024-07-15 18:40:23.247782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.962 [2024-07-15 18:40:23.247857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.962 [2024-07-15 18:40:23.247975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.962 [2024-07-15 18:40:23.248185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.962 [2024-07-15 18:40:23.248316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.962 [2024-07-15 18:40:23.249024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.962 [2024-07-15 18:40:23.249027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.527 [2024-07-15 18:40:23.962729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.527 18:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.527 Malloc0 00:14:49.527 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.527 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:49.527 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.527 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 [2024-07-15 18:40:24.034336] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.787 test case1: single bdev can't be used in multiple subsystems 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 [2024-07-15 18:40:24.058146] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:49.787 [2024-07-15 18:40:24.058188] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:49.787 [2024-07-15 18:40:24.058200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.787 2024/07/15 18:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:49.787 request: 00:14:49.787 { 00:14:49.787 "method": "nvmf_subsystem_add_ns", 00:14:49.787 "params": { 00:14:49.787 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:49.787 "namespace": { 00:14:49.787 "bdev_name": "Malloc0", 00:14:49.787 "no_auto_visible": false 00:14:49.787 } 00:14:49.787 } 00:14:49.787 } 00:14:49.787 Got JSON-RPC error response 00:14:49.787 GoRPCClient: error on JSON-RPC call 00:14:49.787 Adding namespace failed - expected result. 
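Note on test case1 above: the failure is by design. Once Malloc0 is exported through cnode1 the bdev is claimed exclusively ("already claimed: type exclusive_write by module NVMe-oF Target"), so attaching the same bdev to cnode2 is rejected with Code=-32602 and nmic.sh records that as the expected result. A minimal reproduction of the check, assuming rpc_cmd maps to scripts/rpc.py against the same running target (names, sizes and serials below are the ones the trace used):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Malloc0 is already claimed by cnode1, so this call is expected to fail (Code=-32602)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'rejected as expected'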
00:14:49.787 test case2: host connect to nvmf target in multiple paths 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.787 [2024-07-15 18:40:24.074290] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.787 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:50.046 18:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.046 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.046 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.046 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:50.046 18:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:52.577 18:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:52.577 [global] 00:14:52.577 thread=1 00:14:52.577 invalidate=1 00:14:52.577 rw=write 00:14:52.577 time_based=1 00:14:52.577 runtime=1 00:14:52.577 ioengine=libaio 00:14:52.577 direct=1 00:14:52.577 bs=4096 00:14:52.577 iodepth=1 00:14:52.577 norandommap=0 00:14:52.577 numjobs=1 00:14:52.577 00:14:52.577 verify_dump=1 00:14:52.577 verify_backlog=512 00:14:52.577 verify_state_save=0 00:14:52.577 do_verify=1 00:14:52.577 verify=crc32c-intel 00:14:52.577 [job0] 00:14:52.577 filename=/dev/nvme0n1 00:14:52.577 Could not set queue depth (nvme0n1) 00:14:52.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:52.577 fio-3.35 00:14:52.577 Starting 1 thread 00:14:53.512 00:14:53.512 job0: (groupid=0, jobs=1): err= 0: pid=77041: Mon Jul 15 18:40:27 2024 00:14:53.512 read: IOPS=3197, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:14:53.512 slat (nsec): min=8254, max=91527, avg=10469.04, stdev=4804.89 00:14:53.512 clat (usec): min=115, max=373, avg=159.48, stdev=18.97 00:14:53.512 lat (usec): min=123, max=403, avg=169.95, stdev=20.70 00:14:53.512 clat percentiles (usec): 00:14:53.512 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:14:53.512 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:14:53.512 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:14:53.512 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 247], 99.95th=[ 310], 00:14:53.512 | 99.99th=[ 375] 00:14:53.512 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:14:53.512 slat (usec): min=12, max=162, avg=15.50, stdev= 5.25 00:14:53.512 clat (usec): min=74, max=1489, avg=109.86, stdev=28.44 00:14:53.512 lat (usec): min=87, max=1503, avg=125.36, stdev=29.39 00:14:53.512 clat percentiles (usec): 00:14:53.512 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 96], 00:14:53.512 | 30.00th=[ 100], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 113], 00:14:53.512 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 137], 00:14:53.512 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 285], 99.95th=[ 359], 00:14:53.512 | 99.99th=[ 1483] 00:14:53.512 bw ( KiB/s): min=14256, max=14256, per=99.54%, avg=14256.00, stdev= 0.00, samples=1 00:14:53.512 iops : min= 3564, max= 3564, avg=3564.00, stdev= 0.00, samples=1 00:14:53.512 lat (usec) : 100=15.46%, 250=84.45%, 500=0.07% 00:14:53.512 lat (msec) : 2=0.01% 00:14:53.512 cpu : usr=1.40%, sys=6.80%, ctx=6789, majf=0, minf=2 00:14:53.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:53.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.512 issued rwts: total=3201,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:53.512 00:14:53.512 Run status group 0 (all jobs): 00:14:53.512 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:14:53.512 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:14:53.512 00:14:53.512 Disk stats (read/write): 00:14:53.512 nvme0n1: ios=3031/3072, merge=0/0, ticks=490/357, in_queue=847, util=91.18% 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.512 18:40:27 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.512 rmmod nvme_tcp 00:14:53.512 rmmod nvme_fabrics 00:14:53.512 rmmod nvme_keyring 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76936 ']' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76936 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76936 ']' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76936 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76936 00:14:53.512 killing process with pid 76936 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76936' 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76936 00:14:53.512 18:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76936 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.783 00:14:53.783 real 0m5.879s 00:14:53.783 user 0m19.332s 00:14:53.783 sys 0m1.491s 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.783 ************************************ 00:14:53.783 END TEST nvmf_nmic 00:14:53.783 ************************************ 00:14:53.783 18:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
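For readability, the "multiple paths" case traced above condenses to the following standalone commands. This is a minimal sketch, not part of the test harness: the subsystem NQN, host NQN/ID, target address, ports and the /dev/nvme0n1 device name are the values printed in the trace, and it assumes a root shell with nvme-cli and fio installed and the SPDK target already listening on 10.0.0.2:4420 and 10.0.0.2:4421.

    # Establish both paths to the same subsystem (ports 4420 and 4421), as the
    # trace above does.  Host NQN/ID are the values printed in the log.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08
    HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08
    for port in 4420 4421; do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$port"
    done

    # Equivalent of the job file the fio-wrapper printed: 1 s of 4 KiB sequential
    # writes at queue depth 1, verified with crc32c-intel.
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

    # Tear both paths down again.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The verify options mirror the generated job shown above; only cosmetic parameters (norandommap, verify_state_save) are omitted since they do not affect a sequential write job.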
00:14:54.041 18:40:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:54.042 18:40:28 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:54.042 18:40:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:54.042 18:40:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.042 18:40:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:54.042 ************************************ 00:14:54.042 START TEST nvmf_fio_target 00:14:54.042 ************************************ 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:54.042 * Looking for test storage... 00:14:54.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:54.042 Cannot find device "nvmf_tgt_br" 00:14:54.042 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.043 Cannot find device "nvmf_tgt_br2" 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:14:54.043 Cannot find device "nvmf_tgt_br" 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:14:54.043 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:54.043 Cannot find device "nvmf_tgt_br2" 00:14:54.300 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:14:54.300 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:54.300 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:54.300 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.300 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:54.301 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:54.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:14:54.558 00:14:54.558 --- 10.0.0.2 ping statistics --- 00:14:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.558 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:54.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:54.558 00:14:54.558 --- 10.0.0.3 ping statistics --- 00:14:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.558 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:54.558 00:14:54.558 --- 10.0.0.1 ping statistics --- 00:14:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.558 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.558 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=77227 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 77227 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 77227 ']' 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.559 18:40:28 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.559 18:40:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.559 [2024-07-15 18:40:28.930469] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:54.559 [2024-07-15 18:40:28.930590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.815 [2024-07-15 18:40:29.073740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.815 [2024-07-15 18:40:29.188455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.815 [2024-07-15 18:40:29.188508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.815 [2024-07-15 18:40:29.188523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.815 [2024-07-15 18:40:29.188536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.815 [2024-07-15 18:40:29.188547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.815 [2024-07-15 18:40:29.188751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.815 [2024-07-15 18:40:29.189246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.815 [2024-07-15 18:40:29.189979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.815 [2024-07-15 18:40:29.189983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.380 18:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:55.676 [2024-07-15 18:40:30.134572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.934 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.191 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:56.191 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.191 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
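Before the fio_target suite can connect anything, nvmf_veth_init and nvmfappstart (traced above) wire the initiator and the target namespace together and bring up the TCP transport. Stripped of the xtrace noise, the setup looks roughly like the sketch below. It is a condensation of the commands already shown, not a substitute for common.sh: it assumes a root shell run from the SPDK repository root, and it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3), which the log configures identically.

    # veth pair for the initiator side and one for the target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the two peer ends together and open TCP/4420 on the initiator side
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # start the target inside the namespace, then configure it over RPC
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (wait for /var/tmp/spdk.sock before issuing RPCs, as waitforlisten does)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512    # -> Malloc0: 64 MiB, 512 B blocks

The remaining rpc.py calls in the trace repeat bdev_malloc_create for Malloc1 through Malloc6, assemble the raid0/concat bdevs, and attach everything to nqn.2016-06.io.spdk:cnode1 before adding the 10.0.0.2:4420 listener.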
00:14:56.191 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.757 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:56.757 18:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.014 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:57.014 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:57.014 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.582 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:57.582 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.841 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:57.841 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.841 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:57.841 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:58.098 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:58.357 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.357 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.615 18:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.615 18:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.874 18:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.131 [2024-07-15 18:40:33.507809] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.132 18:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:59.390 18:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:59.673 18:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:59.931 18:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:01.829 18:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:02.087 [global] 00:15:02.087 thread=1 00:15:02.087 invalidate=1 00:15:02.087 rw=write 00:15:02.087 time_based=1 00:15:02.087 runtime=1 00:15:02.087 ioengine=libaio 00:15:02.087 direct=1 00:15:02.087 bs=4096 00:15:02.087 iodepth=1 00:15:02.087 norandommap=0 00:15:02.087 numjobs=1 00:15:02.087 00:15:02.087 verify_dump=1 00:15:02.087 verify_backlog=512 00:15:02.087 verify_state_save=0 00:15:02.087 do_verify=1 00:15:02.087 verify=crc32c-intel 00:15:02.087 [job0] 00:15:02.087 filename=/dev/nvme0n1 00:15:02.087 [job1] 00:15:02.087 filename=/dev/nvme0n2 00:15:02.087 [job2] 00:15:02.087 filename=/dev/nvme0n3 00:15:02.087 [job3] 00:15:02.087 filename=/dev/nvme0n4 00:15:02.087 Could not set queue depth (nvme0n1) 00:15:02.087 Could not set queue depth (nvme0n2) 00:15:02.087 Could not set queue depth (nvme0n3) 00:15:02.087 Could not set queue depth (nvme0n4) 00:15:02.087 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.087 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.087 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.087 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.087 fio-3.35 00:15:02.087 Starting 4 threads 00:15:03.461 00:15:03.461 job0: (groupid=0, jobs=1): err= 0: pid=77520: Mon Jul 15 18:40:37 2024 00:15:03.461 read: IOPS=2201, BW=8807KiB/s (9019kB/s)(8816KiB/1001msec) 00:15:03.461 slat (nsec): min=10138, max=32463, avg=12605.29, stdev=2623.20 00:15:03.461 clat (usec): min=135, max=677, avg=222.45, stdev=36.42 00:15:03.461 lat (usec): min=147, max=693, avg=235.06, stdev=36.60 00:15:03.461 clat percentiles (usec): 00:15:03.461 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 190], 00:15:03.461 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:15:03.461 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:15:03.461 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 408], 99.95th=[ 506], 00:15:03.461 | 99.99th=[ 676] 00:15:03.461 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:03.461 slat (usec): min=15, max=137, avg=22.06, stdev= 8.42 00:15:03.461 clat (usec): min=99, max=344, avg=164.01, 
stdev=30.79 00:15:03.461 lat (usec): min=117, max=394, avg=186.07, stdev=33.11 00:15:03.461 clat percentiles (usec): 00:15:03.461 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 137], 00:15:03.461 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 169], 00:15:03.461 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 219], 00:15:03.461 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 302], 00:15:03.461 | 99.99th=[ 347] 00:15:03.461 bw ( KiB/s): min=10560, max=10560, per=27.17%, avg=10560.00, stdev= 0.00, samples=1 00:15:03.461 iops : min= 2640, max= 2640, avg=2640.00, stdev= 0.00, samples=1 00:15:03.461 lat (usec) : 100=0.02%, 250=89.95%, 500=9.99%, 750=0.04% 00:15:03.461 cpu : usr=1.50%, sys=5.90%, ctx=4765, majf=0, minf=9 00:15:03.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.461 issued rwts: total=2204,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.461 job1: (groupid=0, jobs=1): err= 0: pid=77521: Mon Jul 15 18:40:37 2024 00:15:03.461 read: IOPS=2167, BW=8671KiB/s (8879kB/s)(8680KiB/1001msec) 00:15:03.461 slat (nsec): min=9930, max=49784, avg=13758.32, stdev=4158.36 00:15:03.461 clat (usec): min=132, max=1687, avg=222.09, stdev=50.47 00:15:03.461 lat (usec): min=144, max=1698, avg=235.85, stdev=50.55 00:15:03.461 clat percentiles (usec): 00:15:03.461 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 188], 00:15:03.461 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 223], 60.00th=[ 231], 00:15:03.461 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 285], 00:15:03.461 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 457], 99.95th=[ 906], 00:15:03.461 | 99.99th=[ 1680] 00:15:03.461 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:03.461 slat (usec): min=14, max=159, avg=22.03, stdev= 8.04 00:15:03.461 clat (usec): min=95, max=795, avg=166.32, stdev=36.02 00:15:03.461 lat (usec): min=113, max=819, avg=188.35, stdev=37.31 00:15:03.461 clat percentiles (usec): 00:15:03.461 | 1.00th=[ 111], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 137], 00:15:03.461 | 30.00th=[ 145], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 172], 00:15:03.461 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 225], 00:15:03.461 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 338], 99.95th=[ 635], 00:15:03.461 | 99.99th=[ 799] 00:15:03.461 bw ( KiB/s): min=10488, max=10488, per=26.98%, avg=10488.00, stdev= 0.00, samples=1 00:15:03.461 iops : min= 2622, max= 2622, avg=2622.00, stdev= 0.00, samples=1 00:15:03.461 lat (usec) : 100=0.04%, 250=89.58%, 500=10.30%, 750=0.02%, 1000=0.04% 00:15:03.461 lat (msec) : 2=0.02% 00:15:03.461 cpu : usr=1.00%, sys=6.60%, ctx=4730, majf=0, minf=13 00:15:03.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.461 issued rwts: total=2170,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.461 job2: (groupid=0, jobs=1): err= 0: pid=77522: Mon Jul 15 18:40:37 2024 00:15:03.461 read: IOPS=2084, BW=8340KiB/s (8540kB/s)(8348KiB/1001msec) 00:15:03.461 slat (nsec): 
min=10400, max=83063, avg=14551.32, stdev=4949.12 00:15:03.461 clat (usec): min=148, max=2593, avg=224.84, stdev=62.93 00:15:03.461 lat (usec): min=162, max=2610, avg=239.39, stdev=63.14 00:15:03.461 clat percentiles (usec): 00:15:03.461 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 196], 00:15:03.461 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:15:03.461 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:15:03.461 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 478], 99.95th=[ 979], 00:15:03.461 | 99.99th=[ 2606] 00:15:03.461 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:03.461 slat (usec): min=15, max=139, avg=21.86, stdev= 5.67 00:15:03.461 clat (usec): min=100, max=655, avg=171.30, stdev=30.93 00:15:03.461 lat (usec): min=121, max=693, avg=193.16, stdev=32.47 00:15:03.461 clat percentiles (usec): 00:15:03.462 | 1.00th=[ 121], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 147], 00:15:03.462 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 176], 00:15:03.462 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 219], 00:15:03.462 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 351], 99.95th=[ 603], 00:15:03.462 | 99.99th=[ 660] 00:15:03.462 bw ( KiB/s): min= 9912, max= 9912, per=25.50%, avg=9912.00, stdev= 0.00, samples=1 00:15:03.462 iops : min= 2478, max= 2478, avg=2478.00, stdev= 0.00, samples=1 00:15:03.462 lat (usec) : 250=91.05%, 500=8.87%, 750=0.04%, 1000=0.02% 00:15:03.462 lat (msec) : 4=0.02% 00:15:03.462 cpu : usr=1.40%, sys=6.10%, ctx=4648, majf=0, minf=6 00:15:03.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.462 issued rwts: total=2087,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.462 job3: (groupid=0, jobs=1): err= 0: pid=77523: Mon Jul 15 18:40:37 2024 00:15:03.462 read: IOPS=1592, BW=6370KiB/s (6523kB/s)(6376KiB/1001msec) 00:15:03.462 slat (nsec): min=13148, max=57626, avg=16168.56, stdev=3316.74 00:15:03.462 clat (usec): min=196, max=2446, avg=284.80, stdev=69.62 00:15:03.462 lat (usec): min=211, max=2473, avg=300.97, stdev=70.28 00:15:03.462 clat percentiles (usec): 00:15:03.462 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 249], 00:15:03.462 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:15:03.462 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 347], 00:15:03.462 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 988], 99.95th=[ 2442], 00:15:03.462 | 99.99th=[ 2442] 00:15:03.462 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:03.462 slat (usec): min=20, max=196, avg=24.59, stdev= 7.30 00:15:03.462 clat (usec): min=116, max=3701, avg=226.87, stdev=87.72 00:15:03.462 lat (usec): min=137, max=3722, avg=251.46, stdev=88.86 00:15:03.462 clat percentiles (usec): 00:15:03.462 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:15:03.462 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:15:03.462 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 285], 00:15:03.462 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 799], 99.95th=[ 1037], 00:15:03.462 | 99.99th=[ 3687] 00:15:03.462 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:15:03.462 iops : min= 2048, max= 
2048, avg=2048.00, stdev= 0.00, samples=1 00:15:03.462 lat (usec) : 250=52.42%, 500=47.39%, 750=0.03%, 1000=0.08% 00:15:03.462 lat (msec) : 2=0.03%, 4=0.05% 00:15:03.462 cpu : usr=1.90%, sys=5.30%, ctx=3649, majf=0, minf=13 00:15:03.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.462 issued rwts: total=1594,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.462 00:15:03.462 Run status group 0 (all jobs): 00:15:03.462 READ: bw=31.4MiB/s (33.0MB/s), 6370KiB/s-8807KiB/s (6523kB/s-9019kB/s), io=31.5MiB (33.0MB), run=1001-1001msec 00:15:03.462 WRITE: bw=38.0MiB/s (39.8MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=38.0MiB (39.8MB), run=1001-1001msec 00:15:03.462 00:15:03.462 Disk stats (read/write): 00:15:03.462 nvme0n1: ios=1985/2048, merge=0/0, ticks=473/352, in_queue=825, util=86.46% 00:15:03.462 nvme0n2: ios=1941/2048, merge=0/0, ticks=466/358, in_queue=824, util=87.05% 00:15:03.462 nvme0n3: ios=1842/2048, merge=0/0, ticks=427/371, in_queue=798, util=88.87% 00:15:03.462 nvme0n4: ios=1519/1536, merge=0/0, ticks=437/363, in_queue=800, util=89.31% 00:15:03.462 18:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:03.462 [global] 00:15:03.462 thread=1 00:15:03.462 invalidate=1 00:15:03.462 rw=randwrite 00:15:03.462 time_based=1 00:15:03.462 runtime=1 00:15:03.462 ioengine=libaio 00:15:03.462 direct=1 00:15:03.462 bs=4096 00:15:03.462 iodepth=1 00:15:03.462 norandommap=0 00:15:03.462 numjobs=1 00:15:03.462 00:15:03.462 verify_dump=1 00:15:03.462 verify_backlog=512 00:15:03.462 verify_state_save=0 00:15:03.462 do_verify=1 00:15:03.462 verify=crc32c-intel 00:15:03.462 [job0] 00:15:03.462 filename=/dev/nvme0n1 00:15:03.462 [job1] 00:15:03.462 filename=/dev/nvme0n2 00:15:03.462 [job2] 00:15:03.462 filename=/dev/nvme0n3 00:15:03.462 [job3] 00:15:03.462 filename=/dev/nvme0n4 00:15:03.462 Could not set queue depth (nvme0n1) 00:15:03.462 Could not set queue depth (nvme0n2) 00:15:03.462 Could not set queue depth (nvme0n3) 00:15:03.462 Could not set queue depth (nvme0n4) 00:15:03.720 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.720 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.720 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.720 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.720 fio-3.35 00:15:03.720 Starting 4 threads 00:15:04.654 00:15:04.654 job0: (groupid=0, jobs=1): err= 0: pid=77576: Mon Jul 15 18:40:39 2024 00:15:04.654 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:04.654 slat (nsec): min=7093, max=31714, avg=11603.73, stdev=2240.07 00:15:04.654 clat (usec): min=125, max=650, avg=269.23, stdev=91.38 00:15:04.654 lat (usec): min=136, max=659, avg=280.84, stdev=91.48 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 182], 00:15:04.654 | 30.00th=[ 192], 40.00th=[ 206], 50.00th=[ 265], 60.00th=[ 293], 00:15:04.654 | 70.00th=[ 322], 80.00th=[ 359], 90.00th=[ 396], 
95.00th=[ 424], 00:15:04.654 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 603], 99.95th=[ 619], 00:15:04.654 | 99.99th=[ 652] 00:15:04.654 write: IOPS=2161, BW=8647KiB/s (8855kB/s)(8656KiB/1001msec); 0 zone resets 00:15:04.654 slat (nsec): min=9173, max=97786, avg=17660.70, stdev=3933.58 00:15:04.654 clat (usec): min=68, max=1032, avg=176.57, stdev=53.16 00:15:04.654 lat (usec): min=97, max=1052, avg=194.23, stdev=52.85 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 108], 5.00th=[ 124], 10.00th=[ 133], 20.00th=[ 141], 00:15:04.654 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 174], 00:15:04.654 | 70.00th=[ 186], 80.00th=[ 206], 90.00th=[ 251], 95.00th=[ 281], 00:15:04.654 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 594], 99.95th=[ 848], 00:15:04.654 | 99.99th=[ 1037] 00:15:04.654 bw ( KiB/s): min=12288, max=12288, per=51.92%, avg=12288.00, stdev= 0.00, samples=1 00:15:04.654 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:04.654 lat (usec) : 100=0.36%, 250=68.76%, 500=30.20%, 750=0.64%, 1000=0.02% 00:15:04.654 lat (msec) : 2=0.02% 00:15:04.654 cpu : usr=0.90%, sys=4.80%, ctx=4215, majf=0, minf=11 00:15:04.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 issued rwts: total=2048,2164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.654 job1: (groupid=0, jobs=1): err= 0: pid=77577: Mon Jul 15 18:40:39 2024 00:15:04.654 read: IOPS=1266, BW=5067KiB/s (5189kB/s)(5072KiB/1001msec) 00:15:04.654 slat (nsec): min=8538, max=54106, avg=13002.28, stdev=3432.19 00:15:04.654 clat (usec): min=173, max=41511, avg=384.78, stdev=1163.42 00:15:04.654 lat (usec): min=181, max=41532, avg=397.78, stdev=1163.78 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 219], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 281], 00:15:04.654 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 351], 00:15:04.654 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 578], 00:15:04.654 | 99.00th=[ 725], 99.50th=[ 898], 99.90th=[ 3097], 99.95th=[41681], 00:15:04.654 | 99.99th=[41681] 00:15:04.654 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:04.654 slat (usec): min=9, max=143, avg=20.52, stdev= 8.01 00:15:04.654 clat (usec): min=107, max=588, avg=299.91, stdev=133.11 00:15:04.654 lat (usec): min=121, max=608, avg=320.43, stdev=139.01 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 178], 00:15:04.654 | 30.00th=[ 192], 40.00th=[ 212], 50.00th=[ 249], 60.00th=[ 285], 00:15:04.654 | 70.00th=[ 420], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 515], 00:15:04.654 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 586], 00:15:04.654 | 99.99th=[ 586] 00:15:04.654 bw ( KiB/s): min= 4096, max= 4096, per=17.31%, avg=4096.00, stdev= 0.00, samples=1 00:15:04.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:04.654 lat (usec) : 250=29.99%, 500=62.23%, 750=7.38%, 1000=0.21% 00:15:04.654 lat (msec) : 2=0.07%, 4=0.07%, 50=0.04% 00:15:04.654 cpu : usr=0.50%, sys=4.10%, ctx=2804, majf=0, minf=10 00:15:04.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 issued rwts: total=1268,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.654 job2: (groupid=0, jobs=1): err= 0: pid=77579: Mon Jul 15 18:40:39 2024 00:15:04.654 read: IOPS=962, BW=3848KiB/s (3941kB/s)(3852KiB/1001msec) 00:15:04.654 slat (nsec): min=10195, max=83300, avg=15908.70, stdev=6508.34 00:15:04.654 clat (usec): min=248, max=41451, avg=561.21, stdev=1328.95 00:15:04.654 lat (usec): min=264, max=41486, avg=577.12, stdev=1329.78 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 375], 20.00th=[ 445], 00:15:04.654 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:15:04.654 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 668], 00:15:04.654 | 99.00th=[ 930], 99.50th=[ 1631], 99.90th=[41681], 99.95th=[41681], 00:15:04.654 | 99.99th=[41681] 00:15:04.654 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:15:04.654 slat (nsec): min=15030, max=68238, avg=30044.39, stdev=8718.08 00:15:04.654 clat (usec): min=182, max=605, avg=400.44, stdev=86.77 00:15:04.654 lat (usec): min=208, max=632, avg=430.48, stdev=92.58 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 212], 5.00th=[ 253], 10.00th=[ 277], 20.00th=[ 310], 00:15:04.654 | 30.00th=[ 343], 40.00th=[ 379], 50.00th=[ 420], 60.00th=[ 445], 00:15:04.654 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 515], 00:15:04.654 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 603], 00:15:04.654 | 99.99th=[ 603] 00:15:04.654 bw ( KiB/s): min= 4096, max= 4096, per=17.31%, avg=4096.00, stdev= 0.00, samples=1 00:15:04.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:04.654 lat (usec) : 250=2.47%, 500=66.63%, 750=30.15%, 1000=0.30% 00:15:04.654 lat (msec) : 2=0.30%, 4=0.10%, 50=0.05% 00:15:04.654 cpu : usr=0.90%, sys=3.80%, ctx=1987, majf=0, minf=9 00:15:04.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.654 issued rwts: total=963,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.654 job3: (groupid=0, jobs=1): err= 0: pid=77580: Mon Jul 15 18:40:39 2024 00:15:04.654 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:04.654 slat (nsec): min=11338, max=92521, avg=16096.75, stdev=5613.38 00:15:04.654 clat (usec): min=230, max=793, avg=471.51, stdev=124.71 00:15:04.654 lat (usec): min=250, max=808, avg=487.61, stdev=124.76 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 322], 00:15:04.654 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 502], 00:15:04.654 | 70.00th=[ 529], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 668], 00:15:04.654 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 791], 00:15:04.654 | 99.99th=[ 791] 00:15:04.654 write: IOPS=1197, BW=4791KiB/s (4906kB/s)(4796KiB/1001msec); 0 zone resets 00:15:04.654 slat (usec): min=12, max=137, avg=30.54, stdev=13.12 00:15:04.654 clat (usec): min=154, max=2677, avg=384.01, stdev=113.33 00:15:04.654 lat (usec): min=172, max=2708, avg=414.55, stdev=120.19 00:15:04.654 clat percentiles (usec): 00:15:04.654 | 1.00th=[ 221], 5.00th=[ 
245], 10.00th=[ 255], 20.00th=[ 289], 00:15:04.654 | 30.00th=[ 322], 40.00th=[ 351], 50.00th=[ 392], 60.00th=[ 420], 00:15:04.654 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 494], 95.00th=[ 510], 00:15:04.654 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 922], 99.95th=[ 2671], 00:15:04.654 | 99.99th=[ 2671] 00:15:04.654 bw ( KiB/s): min= 4328, max= 4328, per=18.29%, avg=4328.00, stdev= 0.00, samples=1 00:15:04.655 iops : min= 1082, max= 1082, avg=1082.00, stdev= 0.00, samples=1 00:15:04.655 lat (usec) : 250=4.81%, 500=72.33%, 750=22.49%, 1000=0.31% 00:15:04.655 lat (msec) : 4=0.04% 00:15:04.655 cpu : usr=1.10%, sys=4.20%, ctx=2231, majf=0, minf=17 00:15:04.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.655 issued rwts: total=1024,1199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.655 00:15:04.655 Run status group 0 (all jobs): 00:15:04.655 READ: bw=20.7MiB/s (21.7MB/s), 3848KiB/s-8184KiB/s (3941kB/s-8380kB/s), io=20.7MiB (21.7MB), run=1001-1001msec 00:15:04.655 WRITE: bw=23.1MiB/s (24.2MB/s), 4092KiB/s-8647KiB/s (4190kB/s-8855kB/s), io=23.1MiB (24.3MB), run=1001-1001msec 00:15:04.655 00:15:04.655 Disk stats (read/write): 00:15:04.655 nvme0n1: ios=1752/2048, merge=0/0, ticks=470/370, in_queue=840, util=87.46% 00:15:04.655 nvme0n2: ios=1073/1253, merge=0/0, ticks=444/386, in_queue=830, util=88.37% 00:15:04.655 nvme0n3: ios=691/1024, merge=0/0, ticks=394/420, in_queue=814, util=89.06% 00:15:04.655 nvme0n4: ios=856/1024, merge=0/0, ticks=398/409, in_queue=807, util=89.73% 00:15:04.913 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:04.913 [global] 00:15:04.913 thread=1 00:15:04.913 invalidate=1 00:15:04.913 rw=write 00:15:04.913 time_based=1 00:15:04.913 runtime=1 00:15:04.913 ioengine=libaio 00:15:04.913 direct=1 00:15:04.913 bs=4096 00:15:04.913 iodepth=128 00:15:04.913 norandommap=0 00:15:04.913 numjobs=1 00:15:04.913 00:15:04.913 verify_dump=1 00:15:04.913 verify_backlog=512 00:15:04.913 verify_state_save=0 00:15:04.913 do_verify=1 00:15:04.913 verify=crc32c-intel 00:15:04.913 [job0] 00:15:04.913 filename=/dev/nvme0n1 00:15:04.913 [job1] 00:15:04.913 filename=/dev/nvme0n2 00:15:04.913 [job2] 00:15:04.913 filename=/dev/nvme0n3 00:15:04.913 [job3] 00:15:04.913 filename=/dev/nvme0n4 00:15:04.913 Could not set queue depth (nvme0n1) 00:15:04.913 Could not set queue depth (nvme0n2) 00:15:04.913 Could not set queue depth (nvme0n3) 00:15:04.913 Could not set queue depth (nvme0n4) 00:15:04.913 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.913 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.913 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.913 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.913 fio-3.35 00:15:04.913 Starting 4 threads 00:15:06.289 00:15:06.289 job0: (groupid=0, jobs=1): err= 0: pid=77644: Mon Jul 15 18:40:40 2024 00:15:06.289 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1002msec) 00:15:06.289 slat (usec): min=6, max=5606, avg=161.92, 
stdev=757.43 00:15:06.289 clat (usec): min=1243, max=27911, avg=21050.43, stdev=2509.81 00:15:06.289 lat (usec): min=1262, max=28814, avg=21212.35, stdev=2427.13 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[ 6915], 5.00th=[17433], 10.00th=[18744], 20.00th=[20317], 00:15:06.289 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21365], 60.00th=[21890], 00:15:06.289 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[23462], 00:15:06.289 | 99.00th=[24511], 99.50th=[24511], 99.90th=[26870], 99.95th=[27919], 00:15:06.289 | 99.99th=[27919] 00:15:06.289 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:15:06.289 slat (usec): min=11, max=5217, avg=156.52, stdev=717.31 00:15:06.289 clat (usec): min=14153, max=25905, avg=20429.08, stdev=1627.95 00:15:06.289 lat (usec): min=15237, max=25922, avg=20585.60, stdev=1508.17 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[15795], 5.00th=[17433], 10.00th=[18220], 20.00th=[19268], 00:15:06.289 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:15:06.289 | 70.00th=[21365], 80.00th=[21627], 90.00th=[22152], 95.00th=[22938], 00:15:06.289 | 99.00th=[23725], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:15:06.289 | 99.99th=[25822] 00:15:06.289 bw ( KiB/s): min=12288, max=12312, per=26.46%, avg=12300.00, stdev=16.97, samples=2 00:15:06.289 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:15:06.289 lat (msec) : 2=0.12%, 10=0.53%, 20=23.65%, 50=75.71% 00:15:06.289 cpu : usr=3.40%, sys=10.09%, ctx=246, majf=0, minf=15 00:15:06.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:06.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.289 issued rwts: total=3012,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.289 job1: (groupid=0, jobs=1): err= 0: pid=77645: Mon Jul 15 18:40:40 2024 00:15:06.289 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:15:06.289 slat (usec): min=4, max=15150, avg=235.72, stdev=1213.69 00:15:06.289 clat (usec): min=11203, max=59412, avg=29916.97, stdev=15249.34 00:15:06.289 lat (usec): min=12752, max=59431, avg=30152.69, stdev=15327.52 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13960], 20.00th=[15139], 00:15:06.289 | 30.00th=[15664], 40.00th=[16581], 50.00th=[30802], 60.00th=[36439], 00:15:06.289 | 70.00th=[39584], 80.00th=[44827], 90.00th=[52691], 95.00th=[55313], 00:15:06.289 | 99.00th=[58459], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:15:06.289 | 99.99th=[59507] 00:15:06.289 write: IOPS=2433, BW=9733KiB/s (9966kB/s)(9752KiB/1002msec); 0 zone resets 00:15:06.289 slat (usec): min=10, max=10537, avg=204.35, stdev=1065.24 00:15:06.289 clat (usec): min=448, max=46572, avg=26607.03, stdev=10921.01 00:15:06.289 lat (usec): min=3111, max=46624, avg=26811.38, stdev=10944.15 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[ 3523], 5.00th=[12911], 10.00th=[13173], 20.00th=[14615], 00:15:06.289 | 30.00th=[15795], 40.00th=[25822], 50.00th=[28443], 60.00th=[31589], 00:15:06.289 | 70.00th=[34866], 80.00th=[37487], 90.00th=[40109], 95.00th=[41157], 00:15:06.289 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:15:06.289 | 99.99th=[46400] 00:15:06.289 bw ( KiB/s): min= 8192, max=10296, per=19.89%, avg=9244.00, 
stdev=1487.75, samples=2 00:15:06.289 iops : min= 2048, max= 2574, avg=2311.00, stdev=371.94, samples=2 00:15:06.289 lat (usec) : 500=0.02% 00:15:06.289 lat (msec) : 4=0.71%, 10=0.71%, 20=40.10%, 50=51.78%, 100=6.67% 00:15:06.289 cpu : usr=2.10%, sys=6.89%, ctx=231, majf=0, minf=11 00:15:06.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:06.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.289 issued rwts: total=2048,2438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.289 job2: (groupid=0, jobs=1): err= 0: pid=77646: Mon Jul 15 18:40:40 2024 00:15:06.289 read: IOPS=3258, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1003msec) 00:15:06.289 slat (usec): min=5, max=7565, avg=144.05, stdev=621.47 00:15:06.289 clat (usec): min=705, max=39887, avg=18231.84, stdev=4823.56 00:15:06.289 lat (usec): min=2558, max=40652, avg=18375.89, stdev=4836.14 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[11338], 5.00th=[14484], 10.00th=[15139], 20.00th=[15795], 00:15:06.289 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:15:06.289 | 70.00th=[17695], 80.00th=[18744], 90.00th=[25297], 95.00th=[30278], 00:15:06.289 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:15:06.289 | 99.99th=[40109] 00:15:06.289 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:15:06.289 slat (usec): min=11, max=5519, avg=140.43, stdev=580.67 00:15:06.289 clat (usec): min=11524, max=38227, avg=18677.52, stdev=5383.90 00:15:06.289 lat (usec): min=12618, max=39035, avg=18817.95, stdev=5401.69 00:15:06.289 clat percentiles (usec): 00:15:06.289 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14353], 20.00th=[15664], 00:15:06.289 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:15:06.289 | 70.00th=[18220], 80.00th=[20055], 90.00th=[29754], 95.00th=[31589], 00:15:06.290 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:15:06.290 | 99.99th=[38011] 00:15:06.290 bw ( KiB/s): min=12288, max=16416, per=30.88%, avg=14352.00, stdev=2918.94, samples=2 00:15:06.290 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:15:06.290 lat (usec) : 750=0.01% 00:15:06.290 lat (msec) : 4=0.03%, 10=0.10%, 20=81.90%, 50=17.95% 00:15:06.290 cpu : usr=2.79%, sys=9.98%, ctx=514, majf=0, minf=9 00:15:06.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:06.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.290 issued rwts: total=3268,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.290 job3: (groupid=0, jobs=1): err= 0: pid=77647: Mon Jul 15 18:40:40 2024 00:15:06.290 read: IOPS=2074, BW=8299KiB/s (8498kB/s)(8324KiB/1003msec) 00:15:06.290 slat (usec): min=6, max=11697, avg=223.95, stdev=1067.90 00:15:06.290 clat (usec): min=484, max=47032, avg=28835.04, stdev=5141.23 00:15:06.290 lat (usec): min=2443, max=47071, avg=29058.99, stdev=5177.35 00:15:06.290 clat percentiles (usec): 00:15:06.290 | 1.00th=[ 9896], 5.00th=[22676], 10.00th=[23462], 20.00th=[24511], 00:15:06.290 | 30.00th=[26346], 40.00th=[27395], 50.00th=[28967], 60.00th=[30016], 00:15:06.290 | 70.00th=[31327], 80.00th=[32900], 
90.00th=[35390], 95.00th=[36963], 00:15:06.290 | 99.00th=[39060], 99.50th=[41681], 99.90th=[41681], 99.95th=[43254], 00:15:06.290 | 99.99th=[46924] 00:15:06.290 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:15:06.290 slat (usec): min=8, max=11855, avg=200.39, stdev=1025.17 00:15:06.290 clat (usec): min=10659, max=42404, avg=25836.78, stdev=5042.58 00:15:06.290 lat (usec): min=11282, max=42447, avg=26037.17, stdev=5128.95 00:15:06.290 clat percentiles (usec): 00:15:06.290 | 1.00th=[17695], 5.00th=[18482], 10.00th=[19006], 20.00th=[20055], 00:15:06.290 | 30.00th=[22152], 40.00th=[24773], 50.00th=[26084], 60.00th=[27395], 00:15:06.290 | 70.00th=[29492], 80.00th=[30540], 90.00th=[31589], 95.00th=[33424], 00:15:06.290 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[41681], 00:15:06.290 | 99.99th=[42206] 00:15:06.290 bw ( KiB/s): min= 9832, max= 9907, per=21.23%, avg=9869.50, stdev=53.03, samples=2 00:15:06.290 iops : min= 2458, max= 2476, avg=2467.00, stdev=12.73, samples=2 00:15:06.290 lat (usec) : 500=0.02% 00:15:06.290 lat (msec) : 4=0.09%, 10=0.34%, 20=11.01%, 50=88.54% 00:15:06.290 cpu : usr=2.59%, sys=7.78%, ctx=323, majf=0, minf=15 00:15:06.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:06.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.290 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.290 00:15:06.290 Run status group 0 (all jobs): 00:15:06.290 READ: bw=40.5MiB/s (42.5MB/s), 8176KiB/s-12.7MiB/s (8372kB/s-13.3MB/s), io=40.7MiB (42.6MB), run=1002-1003msec 00:15:06.290 WRITE: bw=45.4MiB/s (47.6MB/s), 9733KiB/s-14.0MiB/s (9966kB/s-14.6MB/s), io=45.5MiB (47.7MB), run=1002-1003msec 00:15:06.290 00:15:06.290 Disk stats (read/write): 00:15:06.290 nvme0n1: ios=2610/2604, merge=0/0, ticks=12729/11709, in_queue=24438, util=87.16% 00:15:06.290 nvme0n2: ios=1573/1623, merge=0/0, ticks=13553/12100, in_queue=25653, util=87.88% 00:15:06.290 nvme0n3: ios=3072/3171, merge=0/0, ticks=12905/12005, in_queue=24910, util=88.82% 00:15:06.290 nvme0n4: ios=1949/2048, merge=0/0, ticks=17439/14616, in_queue=32055, util=88.84% 00:15:06.290 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:06.290 [global] 00:15:06.290 thread=1 00:15:06.290 invalidate=1 00:15:06.290 rw=randwrite 00:15:06.290 time_based=1 00:15:06.290 runtime=1 00:15:06.290 ioengine=libaio 00:15:06.290 direct=1 00:15:06.290 bs=4096 00:15:06.290 iodepth=128 00:15:06.290 norandommap=0 00:15:06.290 numjobs=1 00:15:06.290 00:15:06.290 verify_dump=1 00:15:06.290 verify_backlog=512 00:15:06.290 verify_state_save=0 00:15:06.290 do_verify=1 00:15:06.290 verify=crc32c-intel 00:15:06.290 [job0] 00:15:06.290 filename=/dev/nvme0n1 00:15:06.290 [job1] 00:15:06.290 filename=/dev/nvme0n2 00:15:06.290 [job2] 00:15:06.290 filename=/dev/nvme0n3 00:15:06.290 [job3] 00:15:06.290 filename=/dev/nvme0n4 00:15:06.290 Could not set queue depth (nvme0n1) 00:15:06.290 Could not set queue depth (nvme0n2) 00:15:06.290 Could not set queue depth (nvme0n3) 00:15:06.290 Could not set queue depth (nvme0n4) 00:15:06.290 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.290 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.290 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.290 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.290 fio-3.35 00:15:06.290 Starting 4 threads 00:15:07.664 00:15:07.664 job0: (groupid=0, jobs=1): err= 0: pid=77700: Mon Jul 15 18:40:41 2024 00:15:07.664 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:15:07.664 slat (usec): min=4, max=10300, avg=176.55, stdev=832.80 00:15:07.664 clat (usec): min=12296, max=56103, avg=23025.23, stdev=12654.90 00:15:07.664 lat (usec): min=13106, max=56126, avg=23201.79, stdev=12730.40 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[13304], 5.00th=[15139], 10.00th=[15664], 20.00th=[16188], 00:15:07.664 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:15:07.664 | 70.00th=[17957], 80.00th=[21103], 90.00th=[47449], 95.00th=[49021], 00:15:07.664 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:15:07.664 | 99.99th=[56361] 00:15:07.664 write: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1006msec); 0 zone resets 00:15:07.664 slat (usec): min=8, max=11585, avg=175.18, stdev=944.61 00:15:07.664 clat (usec): min=3045, max=50646, avg=22599.27, stdev=11687.64 00:15:07.664 lat (usec): min=7569, max=52717, avg=22774.45, stdev=11751.02 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[ 8848], 5.00th=[13173], 10.00th=[13566], 20.00th=[14877], 00:15:07.664 | 30.00th=[16057], 40.00th=[16581], 50.00th=[16909], 60.00th=[17433], 00:15:07.664 | 70.00th=[19006], 80.00th=[41157], 90.00th=[43254], 95.00th=[45351], 00:15:07.664 | 99.00th=[48497], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:15:07.664 | 99.99th=[50594] 00:15:07.664 bw ( KiB/s): min= 6600, max=16416, per=21.94%, avg=11508.00, stdev=6940.96, samples=2 00:15:07.664 iops : min= 1650, max= 4104, avg=2877.00, stdev=1735.24, samples=2 00:15:07.664 lat (msec) : 4=0.02%, 10=0.58%, 20=74.50%, 50=22.68%, 100=2.23% 00:15:07.664 cpu : usr=2.39%, sys=8.56%, ctx=335, majf=0, minf=7 00:15:07.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:07.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.664 issued rwts: total=2560,3001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.664 job1: (groupid=0, jobs=1): err= 0: pid=77701: Mon Jul 15 18:40:41 2024 00:15:07.664 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:15:07.664 slat (usec): min=8, max=3939, avg=117.43, stdev=533.64 00:15:07.664 clat (usec): min=10395, max=20065, avg=15288.52, stdev=1483.78 00:15:07.664 lat (usec): min=11215, max=20081, avg=15405.95, stdev=1423.21 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13566], 20.00th=[14091], 00:15:07.664 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15270], 60.00th=[15926], 00:15:07.664 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17171], 00:15:07.664 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:15:07.664 | 99.99th=[20055] 00:15:07.664 write: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec); 0 zone resets 00:15:07.664 slat (usec): min=12, max=3764, avg=112.57, stdev=370.40 00:15:07.664 clat 
(usec): min=506, max=18649, avg=14929.27, stdev=1906.14 00:15:07.664 lat (usec): min=3274, max=18675, avg=15041.84, stdev=1894.94 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[ 7439], 5.00th=[11863], 10.00th=[12780], 20.00th=[13698], 00:15:07.664 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15401], 60.00th=[15664], 00:15:07.664 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:15:07.664 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:15:07.664 | 99.99th=[18744] 00:15:07.664 bw ( KiB/s): min=16384, max=16384, per=31.24%, avg=16384.00, stdev= 0.00, samples=1 00:15:07.664 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:07.664 lat (usec) : 750=0.01% 00:15:07.664 lat (msec) : 4=0.27%, 10=0.49%, 20=99.04%, 50=0.18% 00:15:07.664 cpu : usr=4.30%, sys=12.10%, ctx=584, majf=0, minf=12 00:15:07.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:07.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.664 issued rwts: total=4096,4272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.664 job2: (groupid=0, jobs=1): err= 0: pid=77702: Mon Jul 15 18:40:41 2024 00:15:07.664 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:15:07.664 slat (usec): min=6, max=4969, avg=132.41, stdev=623.50 00:15:07.664 clat (usec): min=12067, max=22499, avg=17155.69, stdev=1404.90 00:15:07.664 lat (usec): min=12323, max=23977, avg=17288.10, stdev=1303.94 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[13042], 5.00th=[15139], 10.00th=[15533], 20.00th=[16057], 00:15:07.664 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:15:07.664 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:15:07.664 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:15:07.664 | 99.99th=[22414] 00:15:07.664 write: IOPS=3874, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1002msec); 0 zone resets 00:15:07.664 slat (usec): min=12, max=4501, avg=126.17, stdev=510.39 00:15:07.664 clat (usec): min=1945, max=21995, avg=16720.61, stdev=2513.88 00:15:07.664 lat (usec): min=1965, max=22023, avg=16846.78, stdev=2504.63 00:15:07.664 clat percentiles (usec): 00:15:07.664 | 1.00th=[ 6128], 5.00th=[13173], 10.00th=[13698], 20.00th=[14877], 00:15:07.664 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:15:07.664 | 70.00th=[17957], 80.00th=[18744], 90.00th=[19268], 95.00th=[20055], 00:15:07.664 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:15:07.664 | 99.99th=[21890] 00:15:07.664 bw ( KiB/s): min=14808, max=15262, per=28.67%, avg=15035.00, stdev=321.03, samples=2 00:15:07.664 iops : min= 3702, max= 3815, avg=3758.50, stdev=79.90, samples=2 00:15:07.664 lat (msec) : 2=0.03%, 4=0.25%, 10=0.43%, 20=95.77%, 50=3.52% 00:15:07.664 cpu : usr=3.40%, sys=12.19%, ctx=436, majf=0, minf=9 00:15:07.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:07.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.664 issued rwts: total=3584,3882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.664 job3: (groupid=0, jobs=1): err= 0: pid=77703: Mon Jul 15 
18:40:41 2024 00:15:07.665 read: IOPS=1596, BW=6387KiB/s (6541kB/s)(6432KiB/1007msec) 00:15:07.665 slat (usec): min=8, max=16111, avg=291.39, stdev=1335.98 00:15:07.665 clat (usec): min=80, max=58434, avg=34831.71, stdev=10478.88 00:15:07.665 lat (usec): min=7445, max=58452, avg=35123.09, stdev=10508.82 00:15:07.665 clat percentiles (usec): 00:15:07.665 | 1.00th=[ 7767], 5.00th=[23725], 10.00th=[26084], 20.00th=[27395], 00:15:07.665 | 30.00th=[28967], 40.00th=[29754], 50.00th=[30540], 60.00th=[32900], 00:15:07.665 | 70.00th=[40109], 80.00th=[45876], 90.00th=[49546], 95.00th=[54264], 00:15:07.665 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:15:07.665 | 99.99th=[58459] 00:15:07.665 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:15:07.665 slat (usec): min=15, max=11892, avg=253.47, stdev=1181.44 00:15:07.665 clat (usec): min=21231, max=53843, avg=34355.52, stdev=7302.48 00:15:07.665 lat (usec): min=21611, max=53872, avg=34608.99, stdev=7262.77 00:15:07.665 clat percentiles (usec): 00:15:07.665 | 1.00th=[23462], 5.00th=[27657], 10.00th=[27919], 20.00th=[28705], 00:15:07.665 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30016], 60.00th=[32637], 00:15:07.665 | 70.00th=[41157], 80.00th=[42730], 90.00th=[44827], 95.00th=[47973], 00:15:07.665 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:15:07.665 | 99.99th=[53740] 00:15:07.665 bw ( KiB/s): min= 7688, max= 8264, per=15.21%, avg=7976.00, stdev=407.29, samples=2 00:15:07.665 iops : min= 1922, max= 2066, avg=1994.00, stdev=101.82, samples=2 00:15:07.665 lat (usec) : 100=0.03% 00:15:07.665 lat (msec) : 10=0.88%, 20=0.19%, 50=94.09%, 100=4.81% 00:15:07.665 cpu : usr=1.29%, sys=7.36%, ctx=321, majf=0, minf=13 00:15:07.665 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:15:07.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.665 issued rwts: total=1608,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.665 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.665 00:15:07.665 Run status group 0 (all jobs): 00:15:07.665 READ: bw=46.0MiB/s (48.2MB/s), 6387KiB/s-16.0MiB/s (6541kB/s-16.8MB/s), io=46.3MiB (48.5MB), run=1001-1007msec 00:15:07.665 WRITE: bw=51.2MiB/s (53.7MB/s), 8135KiB/s-16.7MiB/s (8330kB/s-17.5MB/s), io=51.6MiB (54.1MB), run=1001-1007msec 00:15:07.665 00:15:07.665 Disk stats (read/write): 00:15:07.665 nvme0n1: ios=2610/2577, merge=0/0, ticks=14132/11305, in_queue=25437, util=88.26% 00:15:07.665 nvme0n2: ios=3633/3627, merge=0/0, ticks=12963/12625, in_queue=25588, util=89.69% 00:15:07.665 nvme0n3: ios=3093/3341, merge=0/0, ticks=12598/12638, in_queue=25236, util=89.62% 00:15:07.665 nvme0n4: ios=1553/1703, merge=0/0, ticks=13632/12506, in_queue=26138, util=89.77% 00:15:07.665 18:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:07.665 18:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77716 00:15:07.665 18:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:07.665 18:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:07.665 [global] 00:15:07.665 thread=1 00:15:07.665 invalidate=1 00:15:07.665 rw=read 00:15:07.665 time_based=1 00:15:07.665 runtime=10 00:15:07.665 ioengine=libaio 00:15:07.665 direct=1 00:15:07.665 bs=4096 00:15:07.665 iodepth=1 00:15:07.665 
norandommap=1 00:15:07.665 numjobs=1 00:15:07.665 00:15:07.665 [job0] 00:15:07.665 filename=/dev/nvme0n1 00:15:07.665 [job1] 00:15:07.665 filename=/dev/nvme0n2 00:15:07.665 [job2] 00:15:07.665 filename=/dev/nvme0n3 00:15:07.665 [job3] 00:15:07.665 filename=/dev/nvme0n4 00:15:07.665 Could not set queue depth (nvme0n1) 00:15:07.665 Could not set queue depth (nvme0n2) 00:15:07.665 Could not set queue depth (nvme0n3) 00:15:07.665 Could not set queue depth (nvme0n4) 00:15:07.923 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.923 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.923 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.923 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.923 fio-3.35 00:15:07.923 Starting 4 threads 00:15:11.203 18:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:11.203 fio: pid=77765, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:11.203 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=50618368, buflen=4096 00:15:11.203 18:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:11.203 fio: pid=77764, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:11.203 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26669056, buflen=4096 00:15:11.203 18:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.203 18:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:11.460 fio: pid=77762, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:11.460 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=62140416, buflen=4096 00:15:11.460 18:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.460 18:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:11.718 fio: pid=77763, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:11.718 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33374208, buflen=4096 00:15:12.009 00:15:12.009 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77762: Mon Jul 15 18:40:46 2024 00:15:12.009 read: IOPS=4324, BW=16.9MiB/s (17.7MB/s)(59.3MiB/3508msec) 00:15:12.009 slat (usec): min=8, max=14099, avg=16.42, stdev=191.28 00:15:12.009 clat (usec): min=109, max=2466, avg=213.85, stdev=51.05 00:15:12.009 lat (usec): min=119, max=14377, avg=230.27, stdev=198.81 00:15:12.010 clat percentiles (usec): 00:15:12.010 | 1.00th=[ 135], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 190], 00:15:12.010 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:15:12.010 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 260], 00:15:12.010 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 562], 99.95th=[ 947], 00:15:12.010 | 99.99th=[ 2245] 00:15:12.010 bw ( KiB/s): min=16576, max=17248, per=38.79%, avg=16944.00, stdev=215.38, samples=6 
00:15:12.010 iops : min= 4144, max= 4312, avg=4236.00, stdev=53.84, samples=6 00:15:12.010 lat (usec) : 250=91.10%, 500=8.78%, 750=0.05%, 1000=0.02% 00:15:12.010 lat (msec) : 2=0.02%, 4=0.03% 00:15:12.010 cpu : usr=1.06%, sys=4.68%, ctx=15176, majf=0, minf=1 00:15:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 issued rwts: total=15172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.010 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77763: Mon Jul 15 18:40:46 2024 00:15:12.010 read: IOPS=2109, BW=8437KiB/s (8639kB/s)(31.8MiB/3863msec) 00:15:12.010 slat (usec): min=8, max=12834, avg=31.09, stdev=302.52 00:15:12.010 clat (usec): min=105, max=218013, avg=441.02, stdev=2414.17 00:15:12.010 lat (usec): min=114, max=218031, avg=472.11, stdev=2432.74 00:15:12.010 clat percentiles (usec): 00:15:12.010 | 1.00th=[ 119], 5.00th=[ 143], 10.00th=[ 219], 20.00th=[ 273], 00:15:12.010 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 465], 00:15:12.010 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 529], 00:15:12.010 | 99.00th=[ 586], 99.50th=[ 660], 99.90th=[ 1074], 99.95th=[ 1500], 00:15:12.010 | 99.99th=[217056] 00:15:12.010 bw ( KiB/s): min= 7824, max=11061, per=19.42%, avg=8485.14, stdev=1147.80, samples=7 00:15:12.010 iops : min= 1956, max= 2765, avg=2121.14, stdev=286.91, samples=7 00:15:12.010 lat (usec) : 250=15.78%, 500=69.78%, 750=14.19%, 1000=0.10% 00:15:12.010 lat (msec) : 2=0.12%, 10=0.01%, 250=0.01% 00:15:12.010 cpu : usr=0.93%, sys=4.04%, ctx=8158, majf=0, minf=1 00:15:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 issued rwts: total=8149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.010 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77764: Mon Jul 15 18:40:46 2024 00:15:12.010 read: IOPS=2012, BW=8051KiB/s (8244kB/s)(25.4MiB/3235msec) 00:15:12.010 slat (usec): min=13, max=9260, avg=27.83, stdev=148.73 00:15:12.010 clat (usec): min=166, max=4050, avg=466.09, stdev=100.76 00:15:12.010 lat (usec): min=182, max=9582, avg=493.92, stdev=178.36 00:15:12.010 clat percentiles (usec): 00:15:12.010 | 1.00th=[ 253], 5.00th=[ 408], 10.00th=[ 424], 20.00th=[ 437], 00:15:12.010 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 474], 00:15:12.010 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 537], 00:15:12.010 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 1483], 99.95th=[ 3163], 00:15:12.010 | 99.99th=[ 4047] 00:15:12.010 bw ( KiB/s): min= 7808, max= 8288, per=18.45%, avg=8058.50, stdev=182.99, samples=6 00:15:12.010 iops : min= 1952, max= 2072, avg=2014.50, stdev=45.77, samples=6 00:15:12.010 lat (usec) : 250=0.84%, 500=81.79%, 750=17.12%, 1000=0.06% 00:15:12.010 lat (msec) : 2=0.09%, 4=0.06%, 10=0.02% 00:15:12.010 cpu : usr=0.96%, sys=4.27%, ctx=6520, majf=0, minf=1 00:15:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.010 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.010 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77765: Mon Jul 15 18:40:46 2024 00:15:12.010 read: IOPS=4225, BW=16.5MiB/s (17.3MB/s)(48.3MiB/2925msec) 00:15:12.010 slat (usec): min=10, max=373, avg=12.66, stdev= 4.54 00:15:12.010 clat (usec): min=81, max=2634, avg=223.04, stdev=41.88 00:15:12.010 lat (usec): min=152, max=2650, avg=235.70, stdev=42.19 00:15:12.010 clat percentiles (usec): 00:15:12.010 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 204], 00:15:12.010 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:15:12.010 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 262], 00:15:12.010 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 445], 99.95th=[ 742], 00:15:12.010 | 99.99th=[ 1876] 00:15:12.010 bw ( KiB/s): min=16392, max=17296, per=38.66%, avg=16889.80, stdev=324.22, samples=5 00:15:12.010 iops : min= 4098, max= 4324, avg=4222.40, stdev=81.06, samples=5 00:15:12.010 lat (usec) : 100=0.01%, 250=88.80%, 500=11.10%, 750=0.03%, 1000=0.01% 00:15:12.010 lat (msec) : 2=0.03%, 4=0.01% 00:15:12.010 cpu : usr=0.68%, sys=4.41%, ctx=12359, majf=0, minf=2 00:15:12.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.010 issued rwts: total=12359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.010 00:15:12.010 Run status group 0 (all jobs): 00:15:12.010 READ: bw=42.7MiB/s (44.7MB/s), 8051KiB/s-16.9MiB/s (8244kB/s-17.7MB/s), io=165MiB (173MB), run=2925-3863msec 00:15:12.010 00:15:12.010 Disk stats (read/write): 00:15:12.010 nvme0n1: ios=14374/0, merge=0/0, ticks=3179/0, in_queue=3179, util=94.91% 00:15:12.010 nvme0n2: ios=8137/0, merge=0/0, ticks=3626/0, in_queue=3626, util=95.28% 00:15:12.010 nvme0n3: ios=6264/0, merge=0/0, ticks=2964/0, in_queue=2964, util=96.32% 00:15:12.010 nvme0n4: ios=12082/0, merge=0/0, ticks=2753/0, in_queue=2753, util=96.82% 00:15:12.010 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.010 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:12.010 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.010 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:12.575 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.575 18:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:12.832 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.832 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:15:13.090 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.090 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77716 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.349 nvmf hotplug test: fio failed as expected 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:13.349 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.607 rmmod nvme_tcp 00:15:13.607 rmmod nvme_fabrics 00:15:13.607 rmmod nvme_keyring 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.607 18:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 77227 ']' 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 77227 00:15:13.607 18:40:48 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 77227 ']' 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 77227 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77227 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77227' 00:15:13.607 killing process with pid 77227 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 77227 00:15:13.607 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 77227 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:13.866 ************************************ 00:15:13.866 END TEST nvmf_fio_target 00:15:13.866 ************************************ 00:15:13.866 00:15:13.866 real 0m19.979s 00:15:13.866 user 1m17.004s 00:15:13.866 sys 0m8.500s 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.866 18:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.866 18:40:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.866 18:40:48 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:13.866 18:40:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.866 18:40:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.866 18:40:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.170 ************************************ 00:15:14.170 START TEST nvmf_bdevio 00:15:14.170 ************************************ 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:14.170 * Looking for test storage... 
00:15:14.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.170 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.171 18:40:48 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:14.171 Cannot find device "nvmf_tgt_br" 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.171 Cannot find device "nvmf_tgt_br2" 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:14.171 Cannot find device "nvmf_tgt_br" 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:14.171 Cannot find device "nvmf_tgt_br2" 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:14.171 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:14.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:15:14.431 00:15:14.431 --- 10.0.0.2 ping statistics --- 00:15:14.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.431 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:14.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:14.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:14.431 00:15:14.431 --- 10.0.0.3 ping statistics --- 00:15:14.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.431 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:14.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:14.431 00:15:14.431 --- 10.0.0.1 ping statistics --- 00:15:14.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.431 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=78090 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 78090 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 78090 ']' 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.431 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:14.690 [2024-07-15 18:40:48.947345] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:15:14.690 [2024-07-15 18:40:48.948202] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.690 [2024-07-15 18:40:49.094089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.948 [2024-07-15 18:40:49.198980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.948 [2024-07-15 18:40:49.199030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:14.948 [2024-07-15 18:40:49.199041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.948 [2024-07-15 18:40:49.199049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.948 [2024-07-15 18:40:49.199057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.948 [2024-07-15 18:40:49.199277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:14.948 [2024-07-15 18:40:49.199387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:14.948 [2024-07-15 18:40:49.199915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:14.948 [2024-07-15 18:40:49.199924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 [2024-07-15 18:40:49.878121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 Malloc0 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:15:15.515 [2024-07-15 18:40:49.952246] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:15.515 { 00:15:15.515 "params": { 00:15:15.515 "name": "Nvme$subsystem", 00:15:15.515 "trtype": "$TEST_TRANSPORT", 00:15:15.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.515 "adrfam": "ipv4", 00:15:15.515 "trsvcid": "$NVMF_PORT", 00:15:15.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.515 "hdgst": ${hdgst:-false}, 00:15:15.515 "ddgst": ${ddgst:-false} 00:15:15.515 }, 00:15:15.515 "method": "bdev_nvme_attach_controller" 00:15:15.515 } 00:15:15.515 EOF 00:15:15.515 )") 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:15.515 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:15.515 "params": { 00:15:15.515 "name": "Nvme1", 00:15:15.515 "trtype": "tcp", 00:15:15.515 "traddr": "10.0.0.2", 00:15:15.515 "adrfam": "ipv4", 00:15:15.515 "trsvcid": "4420", 00:15:15.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.515 "hdgst": false, 00:15:15.515 "ddgst": false 00:15:15.515 }, 00:15:15.515 "method": "bdev_nvme_attach_controller" 00:15:15.515 }' 00:15:15.773 [2024-07-15 18:40:50.014619] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:15:15.773 [2024-07-15 18:40:50.014716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78144 ] 00:15:15.773 [2024-07-15 18:40:50.159187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.031 [2024-07-15 18:40:50.288739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.031 [2024-07-15 18:40:50.288861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.031 [2024-07-15 18:40:50.288867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.031 I/O targets: 00:15:16.031 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:16.031 00:15:16.031 00:15:16.031 CUnit - A unit testing framework for C - Version 2.1-3 00:15:16.031 http://cunit.sourceforge.net/ 00:15:16.031 00:15:16.031 00:15:16.031 Suite: bdevio tests on: Nvme1n1 00:15:16.031 Test: blockdev write read block ...passed 00:15:16.290 Test: blockdev write zeroes read block ...passed 00:15:16.290 Test: blockdev write zeroes read no split ...passed 00:15:16.290 Test: blockdev write zeroes read split ...passed 00:15:16.290 Test: blockdev write zeroes read split partial ...passed 00:15:16.290 Test: blockdev reset ...[2024-07-15 18:40:50.579937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.290 [2024-07-15 18:40:50.580074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ef180 (9): Bad file descriptor 00:15:16.290 passed 00:15:16.290 Test: blockdev write read 8 blocks ...[2024-07-15 18:40:50.591271] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:16.290 passed 00:15:16.290 Test: blockdev write read size > 128k ...passed 00:15:16.290 Test: blockdev write read invalid size ...passed 00:15:16.290 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:16.290 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:16.290 Test: blockdev write read max offset ...passed 00:15:16.290 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:16.290 Test: blockdev writev readv 8 blocks ...passed 00:15:16.290 Test: blockdev writev readv 30 x 1block ...passed 00:15:16.290 Test: blockdev writev readv block ...passed 00:15:16.290 Test: blockdev writev readv size > 128k ...passed 00:15:16.290 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:16.290 Test: blockdev comparev and writev ...[2024-07-15 18:40:50.764800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.764867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.764887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.764898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.765469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.765488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.765504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.765515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.765904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.765918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.765934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.765944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.766325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.766339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:16.290 [2024-07-15 18:40:50.766355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:16.290 [2024-07-15 18:40:50.766365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:15:16.549 passed 00:15:16.549 Test: blockdev nvme passthru rw ...passed 00:15:16.549 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:40:50.848428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.549 [2024-07-15 18:40:50.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:16.549 [2024-07-15 18:40:50.848608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.549 [2024-07-15 18:40:50.848620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:16.549 passed 00:15:16.549 Test: blockdev nvme admin passthru ...[2024-07-15 18:40:50.848764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.549 [2024-07-15 18:40:50.848781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:16.549 [2024-07-15 18:40:50.848915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.549 [2024-07-15 18:40:50.848928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:16.549 passed 00:15:16.549 Test: blockdev copy ...passed 00:15:16.549 00:15:16.549 Run Summary: Type Total Ran Passed Failed Inactive 00:15:16.549 suites 1 1 n/a 0 0 00:15:16.549 tests 23 23 23 0 0 00:15:16.549 asserts 152 152 152 0 n/a 00:15:16.549 00:15:16.549 Elapsed time = 0.872 seconds 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.807 rmmod nvme_tcp 00:15:16.807 rmmod nvme_fabrics 00:15:16.807 rmmod nvme_keyring 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 78090 ']' 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 78090 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 78090 ']' 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 78090 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78090 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78090' 00:15:16.807 killing process with pid 78090 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 78090 00:15:16.807 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 78090 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:17.075 ************************************ 00:15:17.075 END TEST nvmf_bdevio 00:15:17.075 ************************************ 00:15:17.075 00:15:17.075 real 0m3.160s 00:15:17.075 user 0m10.941s 00:15:17.075 sys 0m0.893s 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.075 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:17.334 18:40:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.334 18:40:51 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.334 18:40:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.334 18:40:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.334 18:40:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.334 ************************************ 00:15:17.334 START TEST nvmf_auth_target 00:15:17.334 ************************************ 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.334 * Looking for test storage... 
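The END TEST banner, the real/user/sys timing block, and the START TEST banner above are produced by the run_test wrapper from autotest_common.sh. A minimal sketch of that wrapper, assuming its visible jobs here are only the banners and the timing (the real helper also toggles xtrace, which the sketch omits):

    run_test_sketch() {
        # Print a START banner, time the wrapped test command, then print an END banner.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # e.g. run_test_sketch nvmf_auth_target ./test/nvmf/target/auth.sh --transport=tcp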
00:15:17.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:17.334 Cannot find device "nvmf_tgt_br" 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:15:17.334 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.334 Cannot find device "nvmf_tgt_br2" 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:17.335 Cannot find device "nvmf_tgt_br" 00:15:17.335 
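The Cannot find device messages above are expected: nvmf_veth_init first deletes any interfaces left over from a previous run, and on a clean host there is nothing to remove. The virtual topology it builds next (traced in full below) condenses to roughly the following sketch; the interface names and addresses come from the trace itself, while the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted for brevity.

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the two host-side peers
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                         # host reaches the namespaced target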
18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:17.335 Cannot find device "nvmf_tgt_br2" 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:17.335 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.594 18:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.594 18:40:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:15:17.594 00:15:17.594 --- 10.0.0.2 ping statistics --- 00:15:17.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.594 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:17.594 00:15:17.594 --- 10.0.0.3 ping statistics --- 00:15:17.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.594 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:17.594 00:15:17.594 --- 10.0.0.1 ping statistics --- 00:15:17.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.594 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.594 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78331 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78331 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78331 ']' 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.873 18:40:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.873 18:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.806 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.806 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:18.806 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.806 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.806 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78375 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=46f72b014a674823fbff0b8f4566bc69ce754b1c0d32a09c 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WSC 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 46f72b014a674823fbff0b8f4566bc69ce754b1c0d32a09c 0 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 46f72b014a674823fbff0b8f4566bc69ce754b1c0d32a09c 0 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=46f72b014a674823fbff0b8f4566bc69ce754b1c0d32a09c 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:18.807 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WSC 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WSC 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.WSC 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff8ba6d95d7901465c9709d09048ee669a705d74f9cf5138fe8ee970b9c3f936 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pYf 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff8ba6d95d7901465c9709d09048ee669a705d74f9cf5138fe8ee970b9c3f936 3 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff8ba6d95d7901465c9709d09048ee669a705d74f9cf5138fe8ee970b9c3f936 3 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff8ba6d95d7901465c9709d09048ee669a705d74f9cf5138fe8ee970b9c3f936 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:19.064 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pYf 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pYf 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.pYf 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4ec1943b0ed95cc2d3cc784d991d0900 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xTX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4ec1943b0ed95cc2d3cc784d991d0900 1 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4ec1943b0ed95cc2d3cc784d991d0900 1 
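gen_dhchap_key pulls random bytes with xxd, then format_dhchap_key wraps them into a DHHC-1 secret string through a small Python helper before the file is chmod'ed to 0600. Below is a hedged re-implementation of that wrapping; the assumption (consistent with the secret lengths printed later in this log) is that the ASCII hex string itself is used as the secret and a little-endian CRC-32 is appended before base64 encoding. Treat the exact CRC handling as an assumption rather than a statement about format_dhchap_key.

    gen_dhchap_key_sketch() {
        # $1 = DHHC-1 digest id (00 null, 01 sha256, 02 sha384, 03 sha512), $2 = hex length
        local digest_id=$1 len=$2
        local hex
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 48 hex chars for len=48
        python3 - "$digest_id" "$hex" <<'PY'
    import base64, struct, sys, zlib
    digest_id, secret = sys.argv[1], sys.argv[2].encode()
    blob = secret + struct.pack("<I", zlib.crc32(secret))   # assumption: CRC-32, little-endian
    print(f"DHHC-1:{digest_id}:{base64.b64encode(blob).decode()}:")
    PY
    }
    key_file=$(mktemp -t spdk.key-null.XXX)
    gen_dhchap_key_sketch 00 48 > "$key_file"
    chmod 0600 "$key_file"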
00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4ec1943b0ed95cc2d3cc784d991d0900 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xTX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xTX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.xTX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a547a714367662f923af50e9b8f0f1e4fbd19f6168f42a09 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6C6 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a547a714367662f923af50e9b8f0f1e4fbd19f6168f42a09 2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a547a714367662f923af50e9b8f0f1e4fbd19f6168f42a09 2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a547a714367662f923af50e9b8f0f1e4fbd19f6168f42a09 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6C6 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6C6 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.6C6 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:19.065 
18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9b20330399ed1aec1dd426971ec179f4f85810782c7f705b 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iQM 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9b20330399ed1aec1dd426971ec179f4f85810782c7f705b 2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9b20330399ed1aec1dd426971ec179f4f85810782c7f705b 2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9b20330399ed1aec1dd426971ec179f4f85810782c7f705b 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:19.065 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iQM 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iQM 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.iQM 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=314d0f9a73a21f510e4263e068b2b708 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gw3 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 314d0f9a73a21f510e4263e068b2b708 1 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 314d0f9a73a21f510e4263e068b2b708 1 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=314d0f9a73a21f510e4263e068b2b708 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gw3 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gw3 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Gw3 00:15:19.323 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a84724a1d627705ef9884e71f7401b50fa959773a9b1419ea6ee14a1ee353f21 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iP5 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a84724a1d627705ef9884e71f7401b50fa959773a9b1419ea6ee14a1ee353f21 3 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a84724a1d627705ef9884e71f7401b50fa959773a9b1419ea6ee14a1ee353f21 3 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a84724a1d627705ef9884e71f7401b50fa959773a9b1419ea6ee14a1ee353f21 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iP5 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iP5 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.iP5 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78331 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78331 ']' 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.324 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
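At this point four key files and three controller-key ("ckey") files exist under /tmp, and ckeys[3] is intentionally left empty. target/auth.sh next registers each file twice: once against the nvmf target's default RPC socket and once, through the hostrpc wrapper, against the bdev_nvme host listening on /var/tmp/host.sock. The loop traced below condenses to roughly this sketch (keys[] and ckeys[] hold the /tmp/spdk.key-* paths generated above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
        if [[ -n "${ckeys[$i]}" ]]; then                                         # ckey3 is empty and skipped
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done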
00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78375 /var/tmp/host.sock 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78375 ']' 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.582 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.839 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.839 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:19.839 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:19.839 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.839 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WSC 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WSC 00:15:20.098 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WSC 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.pYf ]] 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYf 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYf 00:15:20.356 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYf 00:15:20.614 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:20.614 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xTX 00:15:20.614 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.615 18:40:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.615 18:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.615 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xTX 00:15:20.615 18:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xTX 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.6C6 ]] 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6C6 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6C6 00:15:20.873 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6C6 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iQM 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iQM 00:15:21.131 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iQM 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Gw3 ]] 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gw3 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gw3 00:15:21.389 18:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gw3 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iP5 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.iP5 00:15:21.647 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iP5 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.905 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.163 18:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.422 18:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.422 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.422 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.422 00:15:22.681 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.681 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.681 18:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.939 
18:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.939 { 00:15:22.939 "auth": { 00:15:22.939 "dhgroup": "null", 00:15:22.939 "digest": "sha256", 00:15:22.939 "state": "completed" 00:15:22.939 }, 00:15:22.939 "cntlid": 1, 00:15:22.939 "listen_address": { 00:15:22.939 "adrfam": "IPv4", 00:15:22.939 "traddr": "10.0.0.2", 00:15:22.939 "trsvcid": "4420", 00:15:22.939 "trtype": "TCP" 00:15:22.939 }, 00:15:22.939 "peer_address": { 00:15:22.939 "adrfam": "IPv4", 00:15:22.939 "traddr": "10.0.0.1", 00:15:22.939 "trsvcid": "36572", 00:15:22.939 "trtype": "TCP" 00:15:22.939 }, 00:15:22.939 "qid": 0, 00:15:22.939 "state": "enabled", 00:15:22.939 "thread": "nvmf_tgt_poll_group_000" 00:15:22.939 } 00:15:22.939 ]' 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.939 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.198 18:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:27.378 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups null 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.635 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.211 00:15:28.211 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.212 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.212 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.479 { 00:15:28.479 "auth": { 00:15:28.479 "dhgroup": "null", 00:15:28.479 "digest": "sha256", 00:15:28.479 "state": "completed" 00:15:28.479 }, 00:15:28.479 "cntlid": 3, 00:15:28.479 "listen_address": { 00:15:28.479 "adrfam": "IPv4", 00:15:28.479 "traddr": "10.0.0.2", 00:15:28.479 "trsvcid": "4420", 00:15:28.479 "trtype": "TCP" 00:15:28.479 }, 00:15:28.479 "peer_address": { 00:15:28.479 "adrfam": "IPv4", 00:15:28.479 "traddr": "10.0.0.1", 00:15:28.479 "trsvcid": "50208", 00:15:28.479 "trtype": "TCP" 00:15:28.479 }, 00:15:28.479 "qid": 0, 00:15:28.479 "state": "enabled", 00:15:28.479 "thread": "nvmf_tgt_poll_group_000" 00:15:28.479 } 00:15:28.479 ]' 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.479 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.736 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.305 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
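The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that the xtrace keeps printing is what decides whether a given pass uses bidirectional authentication: the --dhchap-ctrlr-key flag is produced only when a controller key exists for that key index, which is why the key3 passes further down call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 alone. A minimal, self-contained bash sketch of that :+ parameter expansion (the array contents here are hypothetical placeholders, not the keys from this run):

    #!/usr/bin/env bash
    # ckeys[1] carries a controller key, ckeys[3] is deliberately empty
    ckeys=([1]="DHHC-1:03:placeholder" [3]="")

    for keyid in 1 3; do
        # ${var:+word} expands to "word" only when var is set and non-empty,
        # so the flag pair disappears for the empty key3 entry
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[*]:-<no ctrlr key>}"
    done
    # prints:
    # key1 -> --dhchap-ctrlr-key ckey1
    # key3 -> <no ctrlr key>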
00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.562 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.820 00:15:29.820 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.820 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.820 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.078 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.078 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.078 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.078 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.078 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.336 { 00:15:30.336 "auth": { 00:15:30.336 "dhgroup": "null", 00:15:30.336 "digest": "sha256", 00:15:30.336 "state": "completed" 00:15:30.336 }, 00:15:30.336 "cntlid": 5, 00:15:30.336 "listen_address": { 00:15:30.336 "adrfam": "IPv4", 00:15:30.336 "traddr": "10.0.0.2", 00:15:30.336 "trsvcid": "4420", 00:15:30.336 "trtype": "TCP" 00:15:30.336 }, 00:15:30.336 "peer_address": { 00:15:30.336 "adrfam": "IPv4", 00:15:30.336 "traddr": "10.0.0.1", 00:15:30.336 "trsvcid": "50226", 00:15:30.336 "trtype": "TCP" 00:15:30.336 }, 00:15:30.336 "qid": 0, 00:15:30.336 "state": "enabled", 00:15:30.336 "thread": "nvmf_tgt_poll_group_000" 00:15:30.336 } 00:15:30.336 ]' 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.336 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.594 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret 
DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.529 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.786 00:15:32.043 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.043 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.043 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.299 18:41:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.299 { 00:15:32.299 "auth": { 00:15:32.299 "dhgroup": "null", 00:15:32.299 "digest": "sha256", 00:15:32.299 "state": "completed" 00:15:32.299 }, 00:15:32.299 "cntlid": 7, 00:15:32.299 "listen_address": { 00:15:32.299 "adrfam": "IPv4", 00:15:32.299 "traddr": "10.0.0.2", 00:15:32.299 "trsvcid": "4420", 00:15:32.299 "trtype": "TCP" 00:15:32.299 }, 00:15:32.299 "peer_address": { 00:15:32.299 "adrfam": "IPv4", 00:15:32.299 "traddr": "10.0.0.1", 00:15:32.299 "trsvcid": "50248", 00:15:32.299 "trtype": "TCP" 00:15:32.299 }, 00:15:32.299 "qid": 0, 00:15:32.299 "state": "enabled", 00:15:32.299 "thread": "nvmf_tgt_poll_group_000" 00:15:32.299 } 00:15:32.299 ]' 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.299 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.556 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:33.485 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.741 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.742 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.033 00:15:34.033 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.033 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.033 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.291 { 00:15:34.291 "auth": { 00:15:34.291 "dhgroup": "ffdhe2048", 00:15:34.291 "digest": "sha256", 00:15:34.291 "state": "completed" 00:15:34.291 }, 00:15:34.291 "cntlid": 9, 00:15:34.291 "listen_address": { 00:15:34.291 "adrfam": "IPv4", 00:15:34.291 "traddr": "10.0.0.2", 00:15:34.291 "trsvcid": "4420", 00:15:34.291 "trtype": "TCP" 00:15:34.291 }, 00:15:34.291 "peer_address": { 00:15:34.291 "adrfam": "IPv4", 00:15:34.291 "traddr": "10.0.0.1", 00:15:34.291 "trsvcid": "50284", 00:15:34.291 "trtype": "TCP" 00:15:34.291 }, 00:15:34.291 "qid": 0, 00:15:34.291 "state": "enabled", 00:15:34.291 "thread": 
"nvmf_tgt_poll_group_000" 00:15:34.291 } 00:15:34.291 ]' 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.291 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.547 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.477 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.478 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.478 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:35.478 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.735 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.735 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.735 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.993 00:15:35.993 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.993 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.993 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.250 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.250 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.250 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.250 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.251 { 00:15:36.251 "auth": { 00:15:36.251 "dhgroup": "ffdhe2048", 00:15:36.251 "digest": "sha256", 00:15:36.251 "state": "completed" 00:15:36.251 }, 00:15:36.251 "cntlid": 11, 00:15:36.251 "listen_address": { 00:15:36.251 "adrfam": "IPv4", 00:15:36.251 "traddr": "10.0.0.2", 00:15:36.251 "trsvcid": "4420", 00:15:36.251 "trtype": "TCP" 00:15:36.251 }, 00:15:36.251 "peer_address": { 00:15:36.251 "adrfam": "IPv4", 00:15:36.251 "traddr": "10.0.0.1", 00:15:36.251 "trsvcid": "49450", 00:15:36.251 "trtype": "TCP" 00:15:36.251 }, 00:15:36.251 "qid": 0, 00:15:36.251 "state": "enabled", 00:15:36.251 "thread": "nvmf_tgt_poll_group_000" 00:15:36.251 } 00:15:36.251 ]' 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:36.251 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.510 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.510 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.510 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.767 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.334 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.902 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.161 00:15:38.161 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.161 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.161 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.433 { 00:15:38.433 "auth": { 00:15:38.433 "dhgroup": "ffdhe2048", 00:15:38.433 "digest": "sha256", 00:15:38.433 "state": "completed" 00:15:38.433 }, 00:15:38.433 "cntlid": 13, 00:15:38.433 "listen_address": { 00:15:38.433 "adrfam": "IPv4", 00:15:38.433 "traddr": "10.0.0.2", 00:15:38.433 "trsvcid": "4420", 00:15:38.433 "trtype": "TCP" 00:15:38.433 }, 00:15:38.433 "peer_address": { 00:15:38.433 "adrfam": "IPv4", 00:15:38.433 "traddr": "10.0.0.1", 00:15:38.433 "trsvcid": "49486", 00:15:38.433 "trtype": "TCP" 00:15:38.433 }, 00:15:38.433 "qid": 0, 00:15:38.433 "state": "enabled", 00:15:38.433 "thread": "nvmf_tgt_poll_group_000" 00:15:38.433 } 00:15:38.433 ]' 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.433 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.699 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.637 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.637 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.201 00:15:40.201 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.201 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.201 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.459 { 00:15:40.459 "auth": { 00:15:40.459 "dhgroup": "ffdhe2048", 00:15:40.459 "digest": "sha256", 00:15:40.459 "state": "completed" 00:15:40.459 }, 00:15:40.459 "cntlid": 15, 00:15:40.459 "listen_address": { 00:15:40.459 "adrfam": "IPv4", 00:15:40.459 "traddr": "10.0.0.2", 00:15:40.459 "trsvcid": "4420", 00:15:40.459 "trtype": "TCP" 00:15:40.459 }, 00:15:40.459 "peer_address": { 00:15:40.459 "adrfam": "IPv4", 00:15:40.459 "traddr": 
"10.0.0.1", 00:15:40.459 "trsvcid": "49518", 00:15:40.459 "trtype": "TCP" 00:15:40.459 }, 00:15:40.459 "qid": 0, 00:15:40.459 "state": "enabled", 00:15:40.459 "thread": "nvmf_tgt_poll_group_000" 00:15:40.459 } 00:15:40.459 ]' 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.459 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.460 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.460 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.460 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.460 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.460 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.718 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.653 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.911 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.169 00:15:42.169 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.169 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.169 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.426 { 00:15:42.426 "auth": { 00:15:42.426 "dhgroup": "ffdhe3072", 00:15:42.426 "digest": "sha256", 00:15:42.426 "state": "completed" 00:15:42.426 }, 00:15:42.426 "cntlid": 17, 00:15:42.426 "listen_address": { 00:15:42.426 "adrfam": "IPv4", 00:15:42.426 "traddr": "10.0.0.2", 00:15:42.426 "trsvcid": "4420", 00:15:42.426 "trtype": "TCP" 00:15:42.426 }, 00:15:42.426 "peer_address": { 00:15:42.426 "adrfam": "IPv4", 00:15:42.426 "traddr": "10.0.0.1", 00:15:42.426 "trsvcid": "49552", 00:15:42.426 "trtype": "TCP" 00:15:42.426 }, 00:15:42.426 "qid": 0, 00:15:42.426 "state": "enabled", 00:15:42.426 "thread": "nvmf_tgt_poll_group_000" 00:15:42.426 } 00:15:42.426 ]' 00:15:42.426 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.683 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.683 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.683 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.683 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.683 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.683 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.683 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.941 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.509 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:44.075 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:44.075 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.075 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.075 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.076 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
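Each pass recorded above boils down to the same sequence, shown here for the sha256/ffdhe3072/key1 case that precedes this point: pin the host's allowed digest and DH group, register the host with its key pair on the target subsystem, attach a controller through the host RPC socket, and confirm via nvmf_subsystem_get_qpairs that the resulting qpair negotiated the expected parameters. A condensed sketch using the wrapper names, key names and addresses visible in the log; $subnqn and $hostnqn are placeholder variables standing in for the full NQNs, and rpc_cmd / hostrpc are assumed to behave as they do in the entries above (target-side RPC vs. rpc.py -s /var/tmp/host.sock):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08

    # host side: restrict the initiator to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side: allow this host, supplying host key and controller key
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, authenticating with the same key pair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # target side: the new qpair must report the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]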
00:15:44.334 00:15:44.334 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.334 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.334 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.593 { 00:15:44.593 "auth": { 00:15:44.593 "dhgroup": "ffdhe3072", 00:15:44.593 "digest": "sha256", 00:15:44.593 "state": "completed" 00:15:44.593 }, 00:15:44.593 "cntlid": 19, 00:15:44.593 "listen_address": { 00:15:44.593 "adrfam": "IPv4", 00:15:44.593 "traddr": "10.0.0.2", 00:15:44.593 "trsvcid": "4420", 00:15:44.593 "trtype": "TCP" 00:15:44.593 }, 00:15:44.593 "peer_address": { 00:15:44.593 "adrfam": "IPv4", 00:15:44.593 "traddr": "10.0.0.1", 00:15:44.593 "trsvcid": "49578", 00:15:44.593 "trtype": "TCP" 00:15:44.593 }, 00:15:44.593 "qid": 0, 00:15:44.593 "state": "enabled", 00:15:44.593 "thread": "nvmf_tgt_poll_group_000" 00:15:44.593 } 00:15:44.593 ]' 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.593 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.593 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.593 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.593 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.593 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.593 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.159 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.726 18:41:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.726 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.984 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.985 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.242 00:15:46.501 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.501 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.501 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.758 { 00:15:46.758 "auth": { 00:15:46.758 "dhgroup": "ffdhe3072", 00:15:46.758 "digest": "sha256", 00:15:46.758 
"state": "completed" 00:15:46.758 }, 00:15:46.758 "cntlid": 21, 00:15:46.758 "listen_address": { 00:15:46.758 "adrfam": "IPv4", 00:15:46.758 "traddr": "10.0.0.2", 00:15:46.758 "trsvcid": "4420", 00:15:46.758 "trtype": "TCP" 00:15:46.758 }, 00:15:46.758 "peer_address": { 00:15:46.758 "adrfam": "IPv4", 00:15:46.758 "traddr": "10.0.0.1", 00:15:46.758 "trsvcid": "58274", 00:15:46.758 "trtype": "TCP" 00:15:46.758 }, 00:15:46.758 "qid": 0, 00:15:46.758 "state": "enabled", 00:15:46.758 "thread": "nvmf_tgt_poll_group_000" 00:15:46.758 } 00:15:46.758 ]' 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.758 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.015 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.950 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.207 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.464 00:15:48.464 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.464 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.464 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.028 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.028 { 00:15:49.028 "auth": { 00:15:49.028 "dhgroup": "ffdhe3072", 00:15:49.029 "digest": "sha256", 00:15:49.029 "state": "completed" 00:15:49.029 }, 00:15:49.029 "cntlid": 23, 00:15:49.029 "listen_address": { 00:15:49.029 "adrfam": "IPv4", 00:15:49.029 "traddr": "10.0.0.2", 00:15:49.029 "trsvcid": "4420", 00:15:49.029 "trtype": "TCP" 00:15:49.029 }, 00:15:49.029 "peer_address": { 00:15:49.029 "adrfam": "IPv4", 00:15:49.029 "traddr": "10.0.0.1", 00:15:49.029 "trsvcid": "58288", 00:15:49.029 "trtype": "TCP" 00:15:49.029 }, 00:15:49.029 "qid": 0, 00:15:49.029 "state": "enabled", 00:15:49.029 "thread": "nvmf_tgt_poll_group_000" 00:15:49.029 } 00:15:49.029 ]' 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.029 18:41:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.029 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.286 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.237 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.495 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.753 00:15:50.753 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.753 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.753 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.012 { 00:15:51.012 "auth": { 00:15:51.012 "dhgroup": "ffdhe4096", 00:15:51.012 "digest": "sha256", 00:15:51.012 "state": "completed" 00:15:51.012 }, 00:15:51.012 "cntlid": 25, 00:15:51.012 "listen_address": { 00:15:51.012 "adrfam": "IPv4", 00:15:51.012 "traddr": "10.0.0.2", 00:15:51.012 "trsvcid": "4420", 00:15:51.012 "trtype": "TCP" 00:15:51.012 }, 00:15:51.012 "peer_address": { 00:15:51.012 "adrfam": "IPv4", 00:15:51.012 "traddr": "10.0.0.1", 00:15:51.012 "trsvcid": "58324", 00:15:51.012 "trtype": "TCP" 00:15:51.012 }, 00:15:51.012 "qid": 0, 00:15:51.012 "state": "enabled", 00:15:51.012 "thread": "nvmf_tgt_poll_group_000" 00:15:51.012 } 00:15:51.012 ]' 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.012 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.271 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.271 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.271 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.529 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.123 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.380 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.946 00:15:52.946 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.946 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.946 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.217 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 
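The add-host/attach pair just traced is the core of every pass: the target is told which DH-HMAC-CHAP key (and optional controller key) this host NQN must present, then the host attaches a controller using the matching key names. A hedged sketch of that pair, reusing the NQNs, address, and key names from the trace; key1/ckey1 are assumed to have been registered earlier in the test setup (not shown in this excerpt), and the target-side call is assumed to go to the target application's default RPC socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08

  # Target side: require this host NQN to authenticate with key1 and expect the
  # controller to prove itself with ckey1.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller presenting the same key pair.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1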
18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.218 { 00:15:53.218 "auth": { 00:15:53.218 "dhgroup": "ffdhe4096", 00:15:53.218 "digest": "sha256", 00:15:53.218 "state": "completed" 00:15:53.218 }, 00:15:53.218 "cntlid": 27, 00:15:53.218 "listen_address": { 00:15:53.218 "adrfam": "IPv4", 00:15:53.218 "traddr": "10.0.0.2", 00:15:53.218 "trsvcid": "4420", 00:15:53.218 "trtype": "TCP" 00:15:53.218 }, 00:15:53.218 "peer_address": { 00:15:53.218 "adrfam": "IPv4", 00:15:53.218 "traddr": "10.0.0.1", 00:15:53.218 "trsvcid": "58358", 00:15:53.218 "trtype": "TCP" 00:15:53.218 }, 00:15:53.218 "qid": 0, 00:15:53.218 "state": "enabled", 00:15:53.218 "thread": "nvmf_tgt_poll_group_000" 00:15:53.218 } 00:15:53.218 ]' 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.218 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.786 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.405 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha256 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.663 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.921 00:15:54.921 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.921 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.921 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.179 { 00:15:55.179 "auth": { 00:15:55.179 "dhgroup": "ffdhe4096", 00:15:55.179 "digest": "sha256", 00:15:55.179 "state": "completed" 00:15:55.179 }, 00:15:55.179 "cntlid": 29, 00:15:55.179 "listen_address": { 00:15:55.179 "adrfam": "IPv4", 00:15:55.179 "traddr": "10.0.0.2", 00:15:55.179 "trsvcid": "4420", 00:15:55.179 "trtype": "TCP" 00:15:55.179 }, 00:15:55.179 "peer_address": { 00:15:55.179 "adrfam": "IPv4", 00:15:55.179 "traddr": "10.0.0.1", 00:15:55.179 "trsvcid": "58388", 00:15:55.179 "trtype": "TCP" 00:15:55.179 }, 00:15:55.179 "qid": 0, 00:15:55.179 "state": "enabled", 00:15:55.179 "thread": "nvmf_tgt_poll_group_000" 00:15:55.179 } 00:15:55.179 ]' 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.179 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.437 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.437 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.437 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.695 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.260 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.828 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.828 18:41:31 
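Note how the key3 passes above drop the --dhchap-ctrlr-key argument: the script builds that option conditionally, so a key index with no controller counterpart falls back to one-way authentication. A small standalone illustration of the same expansion; the ckeys array contents here are placeholders, and only their emptiness matters (the value actually passed is always the key name ckey<keyid>):

  # Illustrative only: ckeys[] entries are placeholders, not the test's real keys.
  declare -a ckeys=("ckey0" "ckey1" "ckey2" "")   # index 3 has no controller key
  keyid=3

  # Same expansion as in the trace: when ckeys[keyid] is empty, ckey=() and the
  # extra option is simply not passed, so key3 authenticates one-way only.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  echo "extra arguments for key$keyid: ${ckey[*]:-(none)}"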
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.087 00:15:57.087 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.087 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.087 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.346 { 00:15:57.346 "auth": { 00:15:57.346 "dhgroup": "ffdhe4096", 00:15:57.346 "digest": "sha256", 00:15:57.346 "state": "completed" 00:15:57.346 }, 00:15:57.346 "cntlid": 31, 00:15:57.346 "listen_address": { 00:15:57.346 "adrfam": "IPv4", 00:15:57.346 "traddr": "10.0.0.2", 00:15:57.346 "trsvcid": "4420", 00:15:57.346 "trtype": "TCP" 00:15:57.346 }, 00:15:57.346 "peer_address": { 00:15:57.346 "adrfam": "IPv4", 00:15:57.346 "traddr": "10.0.0.1", 00:15:57.346 "trsvcid": "58714", 00:15:57.346 "trtype": "TCP" 00:15:57.346 }, 00:15:57.346 "qid": 0, 00:15:57.346 "state": "enabled", 00:15:57.346 "thread": "nvmf_tgt_poll_group_000" 00:15:57.346 } 00:15:57.346 ]' 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.346 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.604 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.604 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.604 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.873 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.463 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.721 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.285 00:15:59.285 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.285 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.285 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.543 18:41:33 
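Every successful attach in this stretch is verified the same way: the target's qpair list is dumped and jq pulls the negotiated auth parameters out of the first entry. A condensed sketch of that check, with rpc_cmd written out as a stand-in for the target-side rpc.py invocation the test wraps (expected values here are the ones asserted in the surrounding trace):

  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Stand-in for the test's rpc_cmd helper (assumed to hit the target's RPC socket).
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")

  # Assert the negotiated parameters on the first (and only) admin qpair.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]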
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.543 { 00:15:59.543 "auth": { 00:15:59.543 "dhgroup": "ffdhe6144", 00:15:59.543 "digest": "sha256", 00:15:59.543 "state": "completed" 00:15:59.543 }, 00:15:59.543 "cntlid": 33, 00:15:59.543 "listen_address": { 00:15:59.543 "adrfam": "IPv4", 00:15:59.543 "traddr": "10.0.0.2", 00:15:59.543 "trsvcid": "4420", 00:15:59.543 "trtype": "TCP" 00:15:59.543 }, 00:15:59.543 "peer_address": { 00:15:59.543 "adrfam": "IPv4", 00:15:59.543 "traddr": "10.0.0.1", 00:15:59.543 "trsvcid": "58738", 00:15:59.543 "trtype": "TCP" 00:15:59.543 }, 00:15:59.543 "qid": 0, 00:15:59.543 "state": "enabled", 00:15:59.543 "thread": "nvmf_tgt_poll_group_000" 00:15:59.543 } 00:15:59.543 ]' 00:15:59.543 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.543 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.543 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.802 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.802 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.802 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.802 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.802 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.060 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.626 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha256 ffdhe6144 1 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.883 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.448 00:16:01.448 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.448 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.448 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.707 { 00:16:01.707 "auth": { 00:16:01.707 "dhgroup": "ffdhe6144", 00:16:01.707 "digest": "sha256", 00:16:01.707 "state": "completed" 00:16:01.707 }, 00:16:01.707 "cntlid": 35, 00:16:01.707 "listen_address": { 00:16:01.707 "adrfam": "IPv4", 00:16:01.707 "traddr": "10.0.0.2", 00:16:01.707 "trsvcid": "4420", 00:16:01.707 "trtype": "TCP" 00:16:01.707 }, 00:16:01.707 "peer_address": { 00:16:01.707 "adrfam": "IPv4", 00:16:01.707 "traddr": "10.0.0.1", 00:16:01.707 "trsvcid": "58756", 00:16:01.707 "trtype": "TCP" 00:16:01.707 }, 00:16:01.707 "qid": 0, 00:16:01.707 "state": "enabled", 00:16:01.707 "thread": "nvmf_tgt_poll_group_000" 00:16:01.707 } 00:16:01.707 ]' 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.707 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.965 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.965 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.965 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.223 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.854 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 18:41:37 
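After each SPDK-host attach is checked, the kernel initiator path is exercised too: nvme-cli connects with the raw DHHC-1 secret strings and then disconnects. A sketch of that call shape with the same flags as the trace; the secret values are deliberately elided here, since the real ones were generated earlier in the test and appear verbatim above:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTUUID=6595a4fd-62c0-4385-bb15-2b50828eda08

  # DHHC-1 formatted secrets, elided on purpose.
  HOST_SECRET='DHHC-1:01:<host secret material>:'
  CTRL_SECRET='DHHC-1:02:<controller secret material>:'

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTUUID}" --hostid "$HOSTUUID" \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n "$SUBNQN"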
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.112 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.370 00:16:03.370 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.370 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.370 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.627 { 00:16:03.627 "auth": { 00:16:03.627 "dhgroup": "ffdhe6144", 00:16:03.627 "digest": "sha256", 00:16:03.627 "state": "completed" 00:16:03.627 }, 00:16:03.627 "cntlid": 37, 00:16:03.627 "listen_address": { 00:16:03.627 "adrfam": "IPv4", 00:16:03.627 "traddr": "10.0.0.2", 00:16:03.627 "trsvcid": "4420", 00:16:03.627 "trtype": "TCP" 00:16:03.627 }, 00:16:03.627 "peer_address": { 00:16:03.627 "adrfam": "IPv4", 00:16:03.627 "traddr": "10.0.0.1", 00:16:03.627 "trsvcid": "58794", 00:16:03.627 "trtype": "TCP" 00:16:03.627 }, 00:16:03.627 "qid": 0, 00:16:03.627 "state": "enabled", 00:16:03.627 "thread": "nvmf_tgt_poll_group_000" 00:16:03.627 } 00:16:03.627 ]' 00:16:03.627 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.885 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.886 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.144 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret 
DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:04.708 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.708 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.965 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.966 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.529 00:16:05.529 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.529 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.529 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.786 18:41:40 
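Teardown is symmetric with setup and appears between every pair of passes above: the SPDK host detaches its controller, the kernel initiator does its connect/disconnect round-trip (sketched earlier), and the host NQN is de-authorized on the target so the next digest/dhgroup/key combination starts from a clean subsystem. A sketch of the two RPC-side steps, with the same socket and NQN assumptions as before:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08

  # Drop the SPDK-host controller created for this pass...
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  # ...then de-authorize the host NQN on the target before the next pass.
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"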
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.786 { 00:16:05.786 "auth": { 00:16:05.786 "dhgroup": "ffdhe6144", 00:16:05.786 "digest": "sha256", 00:16:05.786 "state": "completed" 00:16:05.786 }, 00:16:05.786 "cntlid": 39, 00:16:05.786 "listen_address": { 00:16:05.786 "adrfam": "IPv4", 00:16:05.786 "traddr": "10.0.0.2", 00:16:05.786 "trsvcid": "4420", 00:16:05.786 "trtype": "TCP" 00:16:05.786 }, 00:16:05.786 "peer_address": { 00:16:05.786 "adrfam": "IPv4", 00:16:05.786 "traddr": "10.0.0.1", 00:16:05.786 "trsvcid": "58810", 00:16:05.786 "trtype": "TCP" 00:16:05.786 }, 00:16:05.786 "qid": 0, 00:16:05.786 "state": "enabled", 00:16:05.786 "thread": "nvmf_tgt_poll_group_000" 00:16:05.786 } 00:16:05.786 ]' 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.786 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.043 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.977 18:41:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.234 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.235 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.235 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.235 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.235 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.235 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.813 00:16:07.813 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.813 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.813 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.071 { 00:16:08.071 "auth": { 00:16:08.071 "dhgroup": "ffdhe8192", 00:16:08.071 "digest": "sha256", 00:16:08.071 "state": "completed" 00:16:08.071 }, 00:16:08.071 "cntlid": 41, 00:16:08.071 "listen_address": { 00:16:08.071 "adrfam": "IPv4", 00:16:08.071 "traddr": "10.0.0.2", 00:16:08.071 "trsvcid": "4420", 00:16:08.071 "trtype": "TCP" 00:16:08.071 }, 00:16:08.071 "peer_address": { 00:16:08.071 "adrfam": "IPv4", 00:16:08.071 "traddr": "10.0.0.1", 00:16:08.071 "trsvcid": "53208", 00:16:08.071 "trtype": "TCP" 00:16:08.071 }, 00:16:08.071 "qid": 0, 00:16:08.071 "state": "enabled", 
00:16:08.071 "thread": "nvmf_tgt_poll_group_000" 00:16:08.071 } 00:16:08.071 ]' 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.071 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.329 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.292 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.550 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.116 00:16:10.116 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.116 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.116 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.375 { 00:16:10.375 "auth": { 00:16:10.375 "dhgroup": "ffdhe8192", 00:16:10.375 "digest": "sha256", 00:16:10.375 "state": "completed" 00:16:10.375 }, 00:16:10.375 "cntlid": 43, 00:16:10.375 "listen_address": { 00:16:10.375 "adrfam": "IPv4", 00:16:10.375 "traddr": "10.0.0.2", 00:16:10.375 "trsvcid": "4420", 00:16:10.375 "trtype": "TCP" 00:16:10.375 }, 00:16:10.375 "peer_address": { 00:16:10.375 "adrfam": "IPv4", 00:16:10.375 "traddr": "10.0.0.1", 00:16:10.375 "trsvcid": "53240", 00:16:10.375 "trtype": "TCP" 00:16:10.375 }, 00:16:10.375 "qid": 0, 00:16:10.375 "state": "enabled", 00:16:10.375 "thread": "nvmf_tgt_poll_group_000" 00:16:10.375 } 00:16:10.375 ]' 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.375 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.633 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.633 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.633 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.892 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.458 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.715 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.280 00:16:12.280 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.280 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.280 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.542 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.810 { 00:16:12.810 "auth": { 00:16:12.810 "dhgroup": "ffdhe8192", 00:16:12.810 "digest": "sha256", 00:16:12.810 "state": "completed" 00:16:12.810 }, 00:16:12.810 "cntlid": 45, 00:16:12.810 "listen_address": { 00:16:12.810 "adrfam": "IPv4", 00:16:12.810 "traddr": "10.0.0.2", 00:16:12.810 "trsvcid": "4420", 00:16:12.810 "trtype": "TCP" 00:16:12.810 }, 00:16:12.810 "peer_address": { 00:16:12.810 "adrfam": "IPv4", 00:16:12.810 "traddr": "10.0.0.1", 00:16:12.810 "trsvcid": "53266", 00:16:12.810 "trtype": "TCP" 00:16:12.810 }, 00:16:12.810 "qid": 0, 00:16:12.810 "state": "enabled", 00:16:12.810 "thread": "nvmf_tgt_poll_group_000" 00:16:12.810 } 00:16:12.810 ]' 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.810 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.068 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:13.635 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.893 18:41:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.893 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.151 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.717 00:16:14.717 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.717 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.717 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.975 { 00:16:14.975 "auth": { 00:16:14.975 "dhgroup": "ffdhe8192", 00:16:14.975 "digest": "sha256", 00:16:14.975 "state": "completed" 00:16:14.975 }, 00:16:14.975 "cntlid": 47, 00:16:14.975 "listen_address": { 00:16:14.975 "adrfam": "IPv4", 00:16:14.975 "traddr": "10.0.0.2", 00:16:14.975 "trsvcid": "4420", 00:16:14.975 "trtype": "TCP" 00:16:14.975 }, 00:16:14.975 
"peer_address": { 00:16:14.975 "adrfam": "IPv4", 00:16:14.975 "traddr": "10.0.0.1", 00:16:14.975 "trsvcid": "53286", 00:16:14.975 "trtype": "TCP" 00:16:14.975 }, 00:16:14.975 "qid": 0, 00:16:14.975 "state": "enabled", 00:16:14.975 "thread": "nvmf_tgt_poll_group_000" 00:16:14.975 } 00:16:14.975 ]' 00:16:14.975 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.233 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.493 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:16.059 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.316 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:16.316 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.316 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.317 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.575 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.576 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.576 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.895 00:16:16.895 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.895 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.895 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.168 { 00:16:17.168 "auth": { 00:16:17.168 "dhgroup": "null", 00:16:17.168 "digest": "sha384", 00:16:17.168 "state": "completed" 00:16:17.168 }, 00:16:17.168 "cntlid": 49, 00:16:17.168 "listen_address": { 00:16:17.168 "adrfam": "IPv4", 00:16:17.168 "traddr": "10.0.0.2", 00:16:17.168 "trsvcid": "4420", 00:16:17.168 "trtype": "TCP" 00:16:17.168 }, 00:16:17.168 "peer_address": { 00:16:17.168 "adrfam": "IPv4", 00:16:17.168 "traddr": "10.0.0.1", 00:16:17.168 "trsvcid": "57524", 00:16:17.168 "trtype": "TCP" 00:16:17.168 }, 00:16:17.168 "qid": 0, 00:16:17.168 "state": "enabled", 00:16:17.168 "thread": "nvmf_tgt_poll_group_000" 00:16:17.168 } 00:16:17.168 ]' 00:16:17.168 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:17.427 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.685 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.250 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.508 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.766 00:16:18.766 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.766 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.766 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.023 { 00:16:19.023 "auth": { 00:16:19.023 "dhgroup": "null", 00:16:19.023 "digest": "sha384", 00:16:19.023 "state": "completed" 00:16:19.023 }, 00:16:19.023 "cntlid": 51, 00:16:19.023 "listen_address": { 00:16:19.023 "adrfam": "IPv4", 00:16:19.023 "traddr": "10.0.0.2", 00:16:19.023 "trsvcid": "4420", 00:16:19.023 "trtype": "TCP" 00:16:19.023 }, 00:16:19.023 "peer_address": { 00:16:19.023 "adrfam": "IPv4", 00:16:19.023 "traddr": "10.0.0.1", 00:16:19.023 "trsvcid": "57540", 00:16:19.023 "trtype": "TCP" 00:16:19.023 }, 00:16:19.023 "qid": 0, 00:16:19.023 "state": "enabled", 00:16:19.023 "thread": "nvmf_tgt_poll_group_000" 00:16:19.023 } 00:16:19.023 ]' 00:16:19.023 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.024 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.024 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.281 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:19.281 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.281 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.281 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.281 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.540 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:20.105 
18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.105 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.363 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.620 00:16:20.878 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.878 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.878 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.135 { 00:16:21.135 
"auth": { 00:16:21.135 "dhgroup": "null", 00:16:21.135 "digest": "sha384", 00:16:21.135 "state": "completed" 00:16:21.135 }, 00:16:21.135 "cntlid": 53, 00:16:21.135 "listen_address": { 00:16:21.135 "adrfam": "IPv4", 00:16:21.135 "traddr": "10.0.0.2", 00:16:21.135 "trsvcid": "4420", 00:16:21.135 "trtype": "TCP" 00:16:21.135 }, 00:16:21.135 "peer_address": { 00:16:21.135 "adrfam": "IPv4", 00:16:21.135 "traddr": "10.0.0.1", 00:16:21.135 "trsvcid": "57578", 00:16:21.135 "trtype": "TCP" 00:16:21.135 }, 00:16:21.135 "qid": 0, 00:16:21.135 "state": "enabled", 00:16:21.135 "thread": "nvmf_tgt_poll_group_000" 00:16:21.135 } 00:16:21.135 ]' 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.135 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.699 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.264 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:22.521 18:41:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.521 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.810 00:16:22.810 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.810 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.810 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.069 { 00:16:23.069 "auth": { 00:16:23.069 "dhgroup": "null", 00:16:23.069 "digest": "sha384", 00:16:23.069 "state": "completed" 00:16:23.069 }, 00:16:23.069 "cntlid": 55, 00:16:23.069 "listen_address": { 00:16:23.069 "adrfam": "IPv4", 00:16:23.069 "traddr": "10.0.0.2", 00:16:23.069 "trsvcid": "4420", 00:16:23.069 "trtype": "TCP" 00:16:23.069 }, 00:16:23.069 "peer_address": { 00:16:23.069 "adrfam": "IPv4", 00:16:23.069 "traddr": "10.0.0.1", 00:16:23.069 "trsvcid": "57610", 00:16:23.069 "trtype": "TCP" 00:16:23.069 }, 00:16:23.069 "qid": 0, 00:16:23.069 "state": "enabled", 00:16:23.069 "thread": "nvmf_tgt_poll_group_000" 00:16:23.069 } 00:16:23.069 ]' 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:23.069 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.326 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:23.326 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.326 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.583 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.148 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.405 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.663 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.921 00:16:24.921 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.921 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.921 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.179 { 00:16:25.179 "auth": { 00:16:25.179 "dhgroup": "ffdhe2048", 00:16:25.179 "digest": "sha384", 00:16:25.179 "state": "completed" 00:16:25.179 }, 00:16:25.179 "cntlid": 57, 00:16:25.179 "listen_address": { 00:16:25.179 "adrfam": "IPv4", 00:16:25.179 "traddr": "10.0.0.2", 00:16:25.179 "trsvcid": "4420", 00:16:25.179 "trtype": "TCP" 00:16:25.179 }, 00:16:25.179 "peer_address": { 00:16:25.179 "adrfam": "IPv4", 00:16:25.179 "traddr": "10.0.0.1", 00:16:25.179 "trsvcid": "57644", 00:16:25.179 "trtype": "TCP" 00:16:25.179 }, 00:16:25.179 "qid": 0, 00:16:25.179 "state": "enabled", 00:16:25.179 "thread": "nvmf_tgt_poll_group_000" 00:16:25.179 } 00:16:25.179 ]' 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.179 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.437 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.437 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.437 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.695 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.259 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.516 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.773 00:16:26.773 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.773 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.773 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.338 { 00:16:27.338 "auth": { 00:16:27.338 "dhgroup": "ffdhe2048", 00:16:27.338 "digest": "sha384", 00:16:27.338 "state": "completed" 00:16:27.338 }, 00:16:27.338 "cntlid": 59, 00:16:27.338 "listen_address": { 00:16:27.338 "adrfam": "IPv4", 00:16:27.338 "traddr": "10.0.0.2", 00:16:27.338 "trsvcid": "4420", 00:16:27.338 "trtype": "TCP" 00:16:27.338 }, 00:16:27.338 "peer_address": { 00:16:27.338 "adrfam": "IPv4", 00:16:27.338 "traddr": "10.0.0.1", 00:16:27.338 "trsvcid": "54834", 00:16:27.338 "trtype": "TCP" 00:16:27.338 }, 00:16:27.338 "qid": 0, 00:16:27.338 "state": "enabled", 00:16:27.338 "thread": "nvmf_tgt_poll_group_000" 00:16:27.338 } 00:16:27.338 ]' 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.338 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.596 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.528 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
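[Note: the repeated qpairs JSON dumps above are the verification half of each pass. The lines below are a condensed sketch, not part of the log, of those checks and the teardown; RPC names, jq filters, socket paths and the controller name are taken from the trace, the sha384/ffdhe2048 values are the ones this particular pass asserts, and the target-side RPC is again assumed to be on rpc.py's default socket.]

  # Sketch of the per-iteration verification and teardown seen in the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # The authenticated controller must show up on the host under the expected name.
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # The target-side qpair must report the negotiated digest and dhgroup and a
  # completed DH-HMAC-CHAP transaction.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach before the next digest/dhgroup/key combination is configured.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0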
00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.785 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.041 00:16:29.041 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.041 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.041 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.297 { 00:16:29.297 "auth": { 00:16:29.297 "dhgroup": "ffdhe2048", 00:16:29.297 "digest": "sha384", 00:16:29.297 "state": "completed" 00:16:29.297 }, 00:16:29.297 "cntlid": 61, 00:16:29.297 "listen_address": { 00:16:29.297 "adrfam": "IPv4", 00:16:29.297 "traddr": "10.0.0.2", 00:16:29.297 "trsvcid": "4420", 00:16:29.297 "trtype": "TCP" 00:16:29.297 }, 00:16:29.297 "peer_address": { 00:16:29.297 "adrfam": "IPv4", 00:16:29.297 "traddr": "10.0.0.1", 00:16:29.297 "trsvcid": "54866", 00:16:29.297 "trtype": "TCP" 00:16:29.297 }, 00:16:29.297 "qid": 0, 00:16:29.297 "state": "enabled", 00:16:29.297 "thread": "nvmf_tgt_poll_group_000" 00:16:29.297 } 00:16:29.297 ]' 00:16:29.297 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.555 
18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.555 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.812 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.378 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.636 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.894 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.894 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.894 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.152 00:16:31.152 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.152 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.152 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.409 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.409 { 00:16:31.409 "auth": { 00:16:31.409 "dhgroup": "ffdhe2048", 00:16:31.409 "digest": "sha384", 00:16:31.409 "state": "completed" 00:16:31.409 }, 00:16:31.409 "cntlid": 63, 00:16:31.410 "listen_address": { 00:16:31.410 "adrfam": "IPv4", 00:16:31.410 "traddr": "10.0.0.2", 00:16:31.410 "trsvcid": "4420", 00:16:31.410 "trtype": "TCP" 00:16:31.410 }, 00:16:31.410 "peer_address": { 00:16:31.410 "adrfam": "IPv4", 00:16:31.410 "traddr": "10.0.0.1", 00:16:31.410 "trsvcid": "54888", 00:16:31.410 "trtype": "TCP" 00:16:31.410 }, 00:16:31.410 "qid": 0, 00:16:31.410 "state": "enabled", 00:16:31.410 "thread": "nvmf_tgt_poll_group_000" 00:16:31.410 } 00:16:31.410 ]' 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.410 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.973 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.537 18:42:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.537 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.538 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.795 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.052 00:16:33.053 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.053 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.053 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.310 { 00:16:33.310 "auth": { 00:16:33.310 "dhgroup": "ffdhe3072", 00:16:33.310 "digest": "sha384", 00:16:33.310 "state": "completed" 00:16:33.310 }, 00:16:33.310 "cntlid": 65, 00:16:33.310 "listen_address": { 00:16:33.310 "adrfam": "IPv4", 00:16:33.310 "traddr": "10.0.0.2", 00:16:33.310 "trsvcid": "4420", 00:16:33.310 "trtype": "TCP" 00:16:33.310 }, 00:16:33.310 "peer_address": { 00:16:33.310 "adrfam": "IPv4", 00:16:33.310 "traddr": "10.0.0.1", 00:16:33.310 "trsvcid": "54906", 00:16:33.310 "trtype": "TCP" 00:16:33.310 }, 00:16:33.310 "qid": 0, 00:16:33.310 "state": "enabled", 00:16:33.310 "thread": "nvmf_tgt_poll_group_000" 00:16:33.310 } 00:16:33.310 ]' 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.310 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.568 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:34.499 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.499 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:34.499 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.499 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.500 18:42:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.500 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.064 00:16:35.065 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.065 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.065 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.323 { 00:16:35.323 "auth": { 00:16:35.323 "dhgroup": "ffdhe3072", 00:16:35.323 "digest": "sha384", 00:16:35.323 "state": "completed" 00:16:35.323 }, 00:16:35.323 "cntlid": 67, 00:16:35.323 "listen_address": { 00:16:35.323 "adrfam": "IPv4", 00:16:35.323 "traddr": "10.0.0.2", 00:16:35.323 "trsvcid": "4420", 00:16:35.323 "trtype": "TCP" 00:16:35.323 }, 00:16:35.323 "peer_address": { 00:16:35.323 "adrfam": "IPv4", 00:16:35.323 "traddr": "10.0.0.1", 00:16:35.323 "trsvcid": "54940", 00:16:35.323 "trtype": "TCP" 00:16:35.323 }, 00:16:35.323 "qid": 0, 00:16:35.323 "state": "enabled", 00:16:35.323 "thread": "nvmf_tgt_poll_group_000" 00:16:35.323 } 00:16:35.323 ]' 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.323 
18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.323 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.581 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.229 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
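For reference, one iteration of the connect_authenticate flow traced above reduces to the following host/target RPC sequence. This is a minimal sketch, assuming the same NQNs, the 10.0.0.2:4420 listener and the RPC sockets used in this run (target RPCs on the default socket, host-side bdev_nvme RPCs on /var/tmp/host.sock), shown for the sha384/ffdhe3072/key2 pass being set up here; key2/ckey2 are key names assumed to have been registered with the keyrings earlier in the script (not shown in this excerpt):

  # host side: restrict DH-HMAC-CHAP negotiation to the combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: allow the host NQN to authenticate with key2 (ckey2 enables bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach, then verify the negotiated auth parameters on the target
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha384 / ffdhe3072 / completed
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq lines mirror the [[ sha384 == ... ]] / [[ completed == ... ]] comparisons in the trace, and the trace's hostrpc helper is simply the -s /var/tmp/host.sock rpc.py invocation written out above.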
00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.487 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.745 00:16:36.745 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.745 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.745 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.005 { 00:16:37.005 "auth": { 00:16:37.005 "dhgroup": "ffdhe3072", 00:16:37.005 "digest": "sha384", 00:16:37.005 "state": "completed" 00:16:37.005 }, 00:16:37.005 "cntlid": 69, 00:16:37.005 "listen_address": { 00:16:37.005 "adrfam": "IPv4", 00:16:37.005 "traddr": "10.0.0.2", 00:16:37.005 "trsvcid": "4420", 00:16:37.005 "trtype": "TCP" 00:16:37.005 }, 00:16:37.005 "peer_address": { 00:16:37.005 "adrfam": "IPv4", 00:16:37.005 "traddr": "10.0.0.1", 00:16:37.005 "trsvcid": "54958", 00:16:37.005 "trtype": "TCP" 00:16:37.005 }, 00:16:37.005 "qid": 0, 00:16:37.005 "state": "enabled", 00:16:37.005 "thread": "nvmf_tgt_poll_group_000" 00:16:37.005 } 00:16:37.005 ]' 00:16:37.005 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.264 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.521 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret 
DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:38.085 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.086 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.343 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.909 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.909 18:42:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.909 { 00:16:38.909 "auth": { 00:16:38.909 "dhgroup": "ffdhe3072", 00:16:38.909 "digest": "sha384", 00:16:38.909 "state": "completed" 00:16:38.909 }, 00:16:38.909 "cntlid": 71, 00:16:38.909 "listen_address": { 00:16:38.909 "adrfam": "IPv4", 00:16:38.909 "traddr": "10.0.0.2", 00:16:38.909 "trsvcid": "4420", 00:16:38.909 "trtype": "TCP" 00:16:38.909 }, 00:16:38.909 "peer_address": { 00:16:38.909 "adrfam": "IPv4", 00:16:38.909 "traddr": "10.0.0.1", 00:16:38.909 "trsvcid": "54978", 00:16:38.909 "trtype": "TCP" 00:16:38.909 }, 00:16:38.909 "qid": 0, 00:16:38.909 "state": "enabled", 00:16:38.909 "thread": "nvmf_tgt_poll_group_000" 00:16:38.909 } 00:16:38.909 ]' 00:16:38.909 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.167 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.424 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.396 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.963 00:16:40.963 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.963 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.963 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.221 { 00:16:41.221 "auth": { 00:16:41.221 "dhgroup": "ffdhe4096", 00:16:41.221 "digest": "sha384", 00:16:41.221 "state": "completed" 00:16:41.221 }, 00:16:41.221 "cntlid": 73, 00:16:41.221 "listen_address": { 00:16:41.221 "adrfam": "IPv4", 00:16:41.221 "traddr": "10.0.0.2", 00:16:41.221 "trsvcid": "4420", 00:16:41.221 "trtype": "TCP" 00:16:41.221 }, 00:16:41.221 "peer_address": { 00:16:41.221 "adrfam": "IPv4", 00:16:41.221 "traddr": "10.0.0.1", 00:16:41.221 "trsvcid": "55006", 00:16:41.221 "trtype": "TCP" 00:16:41.221 }, 00:16:41.221 "qid": 0, 00:16:41.221 "state": "enabled", 
00:16:41.221 "thread": "nvmf_tgt_poll_group_000" 00:16:41.221 } 00:16:41.221 ]' 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.221 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.479 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.046 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.305 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.870 00:16:42.870 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.870 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.870 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.128 { 00:16:43.128 "auth": { 00:16:43.128 "dhgroup": "ffdhe4096", 00:16:43.128 "digest": "sha384", 00:16:43.128 "state": "completed" 00:16:43.128 }, 00:16:43.128 "cntlid": 75, 00:16:43.128 "listen_address": { 00:16:43.128 "adrfam": "IPv4", 00:16:43.128 "traddr": "10.0.0.2", 00:16:43.128 "trsvcid": "4420", 00:16:43.128 "trtype": "TCP" 00:16:43.128 }, 00:16:43.128 "peer_address": { 00:16:43.128 "adrfam": "IPv4", 00:16:43.128 "traddr": "10.0.0.1", 00:16:43.128 "trsvcid": "55040", 00:16:43.128 "trtype": "TCP" 00:16:43.128 }, 00:16:43.128 "qid": 0, 00:16:43.128 "state": "enabled", 00:16:43.128 "thread": "nvmf_tgt_poll_group_000" 00:16:43.128 } 00:16:43.128 ]' 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.128 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.385 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.385 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.385 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.661 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:44.242 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.242 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:44.242 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.242 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.500 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.500 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.500 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.500 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.759 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.017 00:16:45.017 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.017 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.017 
18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.275 { 00:16:45.275 "auth": { 00:16:45.275 "dhgroup": "ffdhe4096", 00:16:45.275 "digest": "sha384", 00:16:45.275 "state": "completed" 00:16:45.275 }, 00:16:45.275 "cntlid": 77, 00:16:45.275 "listen_address": { 00:16:45.275 "adrfam": "IPv4", 00:16:45.275 "traddr": "10.0.0.2", 00:16:45.275 "trsvcid": "4420", 00:16:45.275 "trtype": "TCP" 00:16:45.275 }, 00:16:45.275 "peer_address": { 00:16:45.275 "adrfam": "IPv4", 00:16:45.275 "traddr": "10.0.0.1", 00:16:45.275 "trsvcid": "55062", 00:16:45.275 "trtype": "TCP" 00:16:45.275 }, 00:16:45.275 "qid": 0, 00:16:45.275 "state": "enabled", 00:16:45.275 "thread": "nvmf_tgt_poll_group_000" 00:16:45.275 } 00:16:45.275 ]' 00:16:45.275 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.534 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.792 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.727 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.986 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.243 00:16:47.243 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.243 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.243 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.501 { 00:16:47.501 "auth": { 00:16:47.501 "dhgroup": "ffdhe4096", 00:16:47.501 "digest": "sha384", 00:16:47.501 "state": "completed" 00:16:47.501 }, 00:16:47.501 "cntlid": 79, 00:16:47.501 "listen_address": { 00:16:47.501 "adrfam": "IPv4", 00:16:47.501 "traddr": "10.0.0.2", 00:16:47.501 "trsvcid": "4420", 00:16:47.501 "trtype": "TCP" 00:16:47.501 }, 00:16:47.501 "peer_address": { 00:16:47.501 "adrfam": "IPv4", 00:16:47.501 
"traddr": "10.0.0.1", 00:16:47.501 "trsvcid": "39078", 00:16:47.501 "trtype": "TCP" 00:16:47.501 }, 00:16:47.501 "qid": 0, 00:16:47.501 "state": "enabled", 00:16:47.501 "thread": "nvmf_tgt_poll_group_000" 00:16:47.501 } 00:16:47.501 ]' 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.501 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.759 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.759 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.759 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.759 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.759 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.017 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.953 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.211 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.211 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.211 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.469 00:16:49.469 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.469 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.469 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.728 { 00:16:49.728 "auth": { 00:16:49.728 "dhgroup": "ffdhe6144", 00:16:49.728 "digest": "sha384", 00:16:49.728 "state": "completed" 00:16:49.728 }, 00:16:49.728 "cntlid": 81, 00:16:49.728 "listen_address": { 00:16:49.728 "adrfam": "IPv4", 00:16:49.728 "traddr": "10.0.0.2", 00:16:49.728 "trsvcid": "4420", 00:16:49.728 "trtype": "TCP" 00:16:49.728 }, 00:16:49.728 "peer_address": { 00:16:49.728 "adrfam": "IPv4", 00:16:49.728 "traddr": "10.0.0.1", 00:16:49.728 "trsvcid": "39108", 00:16:49.728 "trtype": "TCP" 00:16:49.728 }, 00:16:49.728 "qid": 0, 00:16:49.728 "state": "enabled", 00:16:49.728 "thread": "nvmf_tgt_poll_group_000" 00:16:49.728 } 00:16:49.728 ]' 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.728 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.986 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.986 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.986 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.986 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.986 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.244 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:51.179 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.180 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.437 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
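After the SPDK-host attach/detach check, each pass repeats the same authentication with the kernel initiator before revoking the host and moving on to the next key. A minimal sketch of that leg, assuming an nvme-cli with DH-HMAC-CHAP support and substituting the literal DHHC-1 secrets shown in the nvme connect lines of this trace for the two placeholder variables:

  # the DHHC-1:xx:...: strings for the key under test, copied from the nvme connect lines above
  HOST_SECRET='DHHC-1:01:<host secret from the trace>:'
  CTRL_SECRET='DHHC-1:02:<controller secret from the trace>:'
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 \
      --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # expect: disconnected 1 controller(s)
  # target side: revoke the host again before the next digest/dhgroup/key combination
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08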
00:16:51.694 00:16:51.951 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.951 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.951 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.209 { 00:16:52.209 "auth": { 00:16:52.209 "dhgroup": "ffdhe6144", 00:16:52.209 "digest": "sha384", 00:16:52.209 "state": "completed" 00:16:52.209 }, 00:16:52.209 "cntlid": 83, 00:16:52.209 "listen_address": { 00:16:52.209 "adrfam": "IPv4", 00:16:52.209 "traddr": "10.0.0.2", 00:16:52.209 "trsvcid": "4420", 00:16:52.209 "trtype": "TCP" 00:16:52.209 }, 00:16:52.209 "peer_address": { 00:16:52.209 "adrfam": "IPv4", 00:16:52.209 "traddr": "10.0.0.1", 00:16:52.209 "trsvcid": "39134", 00:16:52.209 "trtype": "TCP" 00:16:52.209 }, 00:16:52.209 "qid": 0, 00:16:52.209 "state": "enabled", 00:16:52.209 "thread": "nvmf_tgt_poll_group_000" 00:16:52.209 } 00:16:52.209 ]' 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.209 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.466 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.399 18:42:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.399 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.400 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.987 00:16:53.987 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.987 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.987 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.284 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.284 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.284 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.285 { 00:16:54.285 "auth": { 00:16:54.285 "dhgroup": "ffdhe6144", 00:16:54.285 "digest": "sha384", 00:16:54.285 
"state": "completed" 00:16:54.285 }, 00:16:54.285 "cntlid": 85, 00:16:54.285 "listen_address": { 00:16:54.285 "adrfam": "IPv4", 00:16:54.285 "traddr": "10.0.0.2", 00:16:54.285 "trsvcid": "4420", 00:16:54.285 "trtype": "TCP" 00:16:54.285 }, 00:16:54.285 "peer_address": { 00:16:54.285 "adrfam": "IPv4", 00:16:54.285 "traddr": "10.0.0.1", 00:16:54.285 "trsvcid": "39168", 00:16:54.285 "trtype": "TCP" 00:16:54.285 }, 00:16:54.285 "qid": 0, 00:16:54.285 "state": "enabled", 00:16:54.285 "thread": "nvmf_tgt_poll_group_000" 00:16:54.285 } 00:16:54.285 ]' 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.285 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.542 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.542 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.542 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.800 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.367 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.626 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.884 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.142 { 00:16:56.142 "auth": { 00:16:56.142 "dhgroup": "ffdhe6144", 00:16:56.142 "digest": "sha384", 00:16:56.142 "state": "completed" 00:16:56.142 }, 00:16:56.142 "cntlid": 87, 00:16:56.142 "listen_address": { 00:16:56.142 "adrfam": "IPv4", 00:16:56.142 "traddr": "10.0.0.2", 00:16:56.142 "trsvcid": "4420", 00:16:56.142 "trtype": "TCP" 00:16:56.142 }, 00:16:56.142 "peer_address": { 00:16:56.142 "adrfam": "IPv4", 00:16:56.142 "traddr": "10.0.0.1", 00:16:56.142 "trsvcid": "38418", 00:16:56.142 "trtype": "TCP" 00:16:56.142 }, 00:16:56.142 "qid": 0, 00:16:56.142 "state": "enabled", 00:16:56.142 "thread": "nvmf_tgt_poll_group_000" 00:16:56.142 } 00:16:56.142 ]' 00:16:56.142 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.401 18:42:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.401 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.659 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:16:57.226 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.226 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:57.226 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.226 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.485 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.457 00:16:58.457 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.457 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.458 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.716 { 00:16:58.716 "auth": { 00:16:58.716 "dhgroup": "ffdhe8192", 00:16:58.716 "digest": "sha384", 00:16:58.716 "state": "completed" 00:16:58.716 }, 00:16:58.716 "cntlid": 89, 00:16:58.716 "listen_address": { 00:16:58.716 "adrfam": "IPv4", 00:16:58.716 "traddr": "10.0.0.2", 00:16:58.716 "trsvcid": "4420", 00:16:58.716 "trtype": "TCP" 00:16:58.716 }, 00:16:58.716 "peer_address": { 00:16:58.716 "adrfam": "IPv4", 00:16:58.716 "traddr": "10.0.0.1", 00:16:58.716 "trsvcid": "38438", 00:16:58.716 "trtype": "TCP" 00:16:58.716 }, 00:16:58.716 "qid": 0, 00:16:58.716 "state": "enabled", 00:16:58.716 "thread": "nvmf_tgt_poll_group_000" 00:16:58.716 } 00:16:58.716 ]' 00:16:58.716 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.716 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.972 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.905 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.163 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.164 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.730 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.987 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.246 
18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.246 { 00:17:01.246 "auth": { 00:17:01.246 "dhgroup": "ffdhe8192", 00:17:01.246 "digest": "sha384", 00:17:01.246 "state": "completed" 00:17:01.246 }, 00:17:01.246 "cntlid": 91, 00:17:01.246 "listen_address": { 00:17:01.246 "adrfam": "IPv4", 00:17:01.246 "traddr": "10.0.0.2", 00:17:01.246 "trsvcid": "4420", 00:17:01.246 "trtype": "TCP" 00:17:01.246 }, 00:17:01.246 "peer_address": { 00:17:01.246 "adrfam": "IPv4", 00:17:01.246 "traddr": "10.0.0.1", 00:17:01.246 "trsvcid": "38464", 00:17:01.246 "trtype": "TCP" 00:17:01.246 }, 00:17:01.246 "qid": 0, 00:17:01.246 "state": "enabled", 00:17:01.246 "thread": "nvmf_tgt_poll_group_000" 00:17:01.246 } 00:17:01.246 ]' 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.246 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.503 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.741 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.446 00:17:03.446 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.446 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.446 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.758 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.758 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.758 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.758 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.758 { 00:17:03.758 "auth": { 00:17:03.758 "dhgroup": "ffdhe8192", 00:17:03.758 "digest": "sha384", 00:17:03.758 "state": "completed" 00:17:03.758 }, 00:17:03.758 "cntlid": 93, 00:17:03.758 "listen_address": { 00:17:03.758 "adrfam": "IPv4", 00:17:03.758 "traddr": "10.0.0.2", 00:17:03.758 "trsvcid": "4420", 00:17:03.758 "trtype": "TCP" 00:17:03.758 }, 00:17:03.758 "peer_address": { 00:17:03.758 "adrfam": "IPv4", 00:17:03.758 "traddr": "10.0.0.1", 00:17:03.758 "trsvcid": "38496", 00:17:03.758 "trtype": "TCP" 00:17:03.758 }, 00:17:03.758 "qid": 0, 00:17:03.758 "state": "enabled", 00:17:03.758 "thread": "nvmf_tgt_poll_group_000" 00:17:03.758 } 00:17:03.758 ]' 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.758 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.016 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.979 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.301 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.301 18:42:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.901 00:17:05.901 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.901 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.901 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.159 { 00:17:06.159 "auth": { 00:17:06.159 "dhgroup": "ffdhe8192", 00:17:06.159 "digest": "sha384", 00:17:06.159 "state": "completed" 00:17:06.159 }, 00:17:06.159 "cntlid": 95, 00:17:06.159 "listen_address": { 00:17:06.159 "adrfam": "IPv4", 00:17:06.159 "traddr": "10.0.0.2", 00:17:06.159 "trsvcid": "4420", 00:17:06.159 "trtype": "TCP" 00:17:06.159 }, 00:17:06.159 "peer_address": { 00:17:06.159 "adrfam": "IPv4", 00:17:06.159 "traddr": "10.0.0.1", 00:17:06.159 "trsvcid": "38520", 00:17:06.159 "trtype": "TCP" 00:17:06.159 }, 00:17:06.159 "qid": 0, 00:17:06.159 "state": "enabled", 00:17:06.159 "thread": "nvmf_tgt_poll_group_000" 00:17:06.159 } 00:17:06.159 ]' 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.159 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.417 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.352 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.353 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.940 00:17:07.940 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.940 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.940 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.198 { 00:17:08.198 "auth": { 00:17:08.198 "dhgroup": "null", 00:17:08.198 "digest": "sha512", 00:17:08.198 "state": "completed" 00:17:08.198 }, 00:17:08.198 "cntlid": 97, 00:17:08.198 "listen_address": { 00:17:08.198 "adrfam": "IPv4", 00:17:08.198 "traddr": "10.0.0.2", 00:17:08.198 "trsvcid": "4420", 00:17:08.198 "trtype": "TCP" 00:17:08.198 }, 00:17:08.198 "peer_address": { 00:17:08.198 "adrfam": "IPv4", 00:17:08.198 "traddr": "10.0.0.1", 00:17:08.198 "trsvcid": "38758", 00:17:08.198 "trtype": "TCP" 00:17:08.198 }, 00:17:08.198 "qid": 0, 00:17:08.198 "state": "enabled", 00:17:08.198 "thread": "nvmf_tgt_poll_group_000" 00:17:08.198 } 00:17:08.198 ]' 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.198 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.765 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.329 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.330 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.588 18:42:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.588 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.845 00:17:09.845 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.845 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.845 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.103 { 00:17:10.103 "auth": { 00:17:10.103 "dhgroup": "null", 00:17:10.103 "digest": "sha512", 00:17:10.103 "state": "completed" 00:17:10.103 }, 00:17:10.103 "cntlid": 99, 00:17:10.103 "listen_address": { 00:17:10.103 "adrfam": "IPv4", 00:17:10.103 "traddr": "10.0.0.2", 00:17:10.103 "trsvcid": "4420", 00:17:10.103 "trtype": "TCP" 00:17:10.103 }, 00:17:10.103 "peer_address": { 00:17:10.103 "adrfam": "IPv4", 00:17:10.103 "traddr": "10.0.0.1", 00:17:10.103 "trsvcid": "38776", 00:17:10.103 "trtype": "TCP" 00:17:10.103 }, 00:17:10.103 "qid": 0, 00:17:10.103 "state": "enabled", 00:17:10.103 "thread": "nvmf_tgt_poll_group_000" 00:17:10.103 } 00:17:10.103 ]' 00:17:10.103 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.361 18:42:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.361 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.618 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:11.551 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:11.809 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:11.809 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.809 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.809 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:11.809 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.810 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.067 00:17:12.067 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.067 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.067 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.325 { 00:17:12.325 "auth": { 00:17:12.325 "dhgroup": "null", 00:17:12.325 "digest": "sha512", 00:17:12.325 "state": "completed" 00:17:12.325 }, 00:17:12.325 "cntlid": 101, 00:17:12.325 "listen_address": { 00:17:12.325 "adrfam": "IPv4", 00:17:12.325 "traddr": "10.0.0.2", 00:17:12.325 "trsvcid": "4420", 00:17:12.325 "trtype": "TCP" 00:17:12.325 }, 00:17:12.325 "peer_address": { 00:17:12.325 "adrfam": "IPv4", 00:17:12.325 "traddr": "10.0.0.1", 00:17:12.325 "trsvcid": "38800", 00:17:12.325 "trtype": "TCP" 00:17:12.325 }, 00:17:12.325 "qid": 0, 00:17:12.325 "state": "enabled", 00:17:12.325 "thread": "nvmf_tgt_poll_group_000" 00:17:12.325 } 00:17:12.325 ]' 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.325 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.892 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret 
DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.460 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.718 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:13.718 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.718 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.718 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.719 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.976 00:17:13.976 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.976 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.976 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.235 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.235 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
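(Alongside the SPDK host stack, every iteration also exercises the Linux kernel initiator through nvme-cli, handing the DH-HMAC-CHAP secrets over as DHHC-1 strings on the command line, exactly as in the nvme connect lines of this trace. A condensed sketch with the secrets elided as placeholders; the full DHHC-1 values for each key pair appear verbatim above, and ${hostid} is the same host UUID.)

  # kernel initiator: connect with explicit DH-HMAC-CHAP secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "${hostid}" \
      --dhchap-secret 'DHHC-1:xx:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:xx:<controller secret>'

  # tear down the kernel-side session again
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # target side: drop the host entry so the next pass can re-add it with different keys
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      "nqn.2014-08.org.nvmexpress:uuid:${hostid}"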
00:17:14.235 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.235 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.493 { 00:17:14.493 "auth": { 00:17:14.493 "dhgroup": "null", 00:17:14.493 "digest": "sha512", 00:17:14.493 "state": "completed" 00:17:14.493 }, 00:17:14.493 "cntlid": 103, 00:17:14.493 "listen_address": { 00:17:14.493 "adrfam": "IPv4", 00:17:14.493 "traddr": "10.0.0.2", 00:17:14.493 "trsvcid": "4420", 00:17:14.493 "trtype": "TCP" 00:17:14.493 }, 00:17:14.493 "peer_address": { 00:17:14.493 "adrfam": "IPv4", 00:17:14.493 "traddr": "10.0.0.1", 00:17:14.493 "trsvcid": "38816", 00:17:14.493 "trtype": "TCP" 00:17:14.493 }, 00:17:14.493 "qid": 0, 00:17:14.493 "state": "enabled", 00:17:14.493 "thread": "nvmf_tgt_poll_group_000" 00:17:14.493 } 00:17:14.493 ]' 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.493 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.751 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.683 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.941 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.198 00:17:16.198 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.198 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.198 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.456 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.457 { 00:17:16.457 "auth": { 00:17:16.457 "dhgroup": "ffdhe2048", 00:17:16.457 "digest": "sha512", 00:17:16.457 "state": "completed" 00:17:16.457 }, 00:17:16.457 "cntlid": 105, 00:17:16.457 "listen_address": { 00:17:16.457 "adrfam": "IPv4", 00:17:16.457 "traddr": "10.0.0.2", 00:17:16.457 "trsvcid": "4420", 00:17:16.457 "trtype": "TCP" 00:17:16.457 }, 00:17:16.457 "peer_address": { 00:17:16.457 "adrfam": "IPv4", 00:17:16.457 "traddr": "10.0.0.1", 00:17:16.457 "trsvcid": "48784", 00:17:16.457 "trtype": "TCP" 00:17:16.457 }, 00:17:16.457 "qid": 0, 00:17:16.457 "state": "enabled", 00:17:16.457 "thread": "nvmf_tgt_poll_group_000" 00:17:16.457 } 00:17:16.457 ]' 00:17:16.457 18:42:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.714 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.714 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.714 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.714 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.714 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.714 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.714 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.971 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:17.904 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.162 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.420 00:17:18.420 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.420 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.420 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.679 { 00:17:18.679 "auth": { 00:17:18.679 "dhgroup": "ffdhe2048", 00:17:18.679 "digest": "sha512", 00:17:18.679 "state": "completed" 00:17:18.679 }, 00:17:18.679 "cntlid": 107, 00:17:18.679 "listen_address": { 00:17:18.679 "adrfam": "IPv4", 00:17:18.679 "traddr": "10.0.0.2", 00:17:18.679 "trsvcid": "4420", 00:17:18.679 "trtype": "TCP" 00:17:18.679 }, 00:17:18.679 "peer_address": { 00:17:18.679 "adrfam": "IPv4", 00:17:18.679 "traddr": "10.0.0.1", 00:17:18.679 "trsvcid": "48814", 00:17:18.679 "trtype": "TCP" 00:17:18.679 }, 00:17:18.679 "qid": 0, 00:17:18.679 "state": "enabled", 00:17:18.679 "thread": "nvmf_tgt_poll_group_000" 00:17:18.679 } 00:17:18.679 ]' 00:17:18.679 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.948 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.206 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 
--hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:19.771 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.030 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.288 00:17:20.288 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.288 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.288 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:17:20.546 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.546 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.546 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.546 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.546 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.826 { 00:17:20.826 "auth": { 00:17:20.826 "dhgroup": "ffdhe2048", 00:17:20.826 "digest": "sha512", 00:17:20.826 "state": "completed" 00:17:20.826 }, 00:17:20.826 "cntlid": 109, 00:17:20.826 "listen_address": { 00:17:20.826 "adrfam": "IPv4", 00:17:20.826 "traddr": "10.0.0.2", 00:17:20.826 "trsvcid": "4420", 00:17:20.826 "trtype": "TCP" 00:17:20.826 }, 00:17:20.826 "peer_address": { 00:17:20.826 "adrfam": "IPv4", 00:17:20.826 "traddr": "10.0.0.1", 00:17:20.826 "trsvcid": "48838", 00:17:20.826 "trtype": "TCP" 00:17:20.826 }, 00:17:20.826 "qid": 0, 00:17:20.826 "state": "enabled", 00:17:20.826 "thread": "nvmf_tgt_poll_group_000" 00:17:20.826 } 00:17:20.826 ]' 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.826 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.084 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.650 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.216 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.474 00:17:22.474 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.474 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.474 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.733 { 00:17:22.733 "auth": { 00:17:22.733 "dhgroup": "ffdhe2048", 00:17:22.733 "digest": "sha512", 00:17:22.733 "state": "completed" 00:17:22.733 }, 00:17:22.733 "cntlid": 111, 00:17:22.733 "listen_address": { 00:17:22.733 "adrfam": "IPv4", 00:17:22.733 "traddr": "10.0.0.2", 00:17:22.733 "trsvcid": "4420", 00:17:22.733 "trtype": "TCP" 00:17:22.733 }, 00:17:22.733 "peer_address": { 00:17:22.733 "adrfam": "IPv4", 00:17:22.733 "traddr": "10.0.0.1", 00:17:22.733 "trsvcid": "48858", 00:17:22.733 "trtype": "TCP" 00:17:22.733 }, 00:17:22.733 "qid": 0, 
00:17:22.733 "state": "enabled", 00:17:22.733 "thread": "nvmf_tgt_poll_group_000" 00:17:22.733 } 00:17:22.733 ]' 00:17:22.733 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.733 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.991 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:23.558 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.558 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:23.558 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.558 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.558 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.817 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.817 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.817 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.817 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.093 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.094 18:42:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.094 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.351 00:17:24.351 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.351 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.351 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.608 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.608 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.608 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.608 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.608 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.608 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.608 { 00:17:24.608 "auth": { 00:17:24.608 "dhgroup": "ffdhe3072", 00:17:24.608 "digest": "sha512", 00:17:24.608 "state": "completed" 00:17:24.608 }, 00:17:24.609 "cntlid": 113, 00:17:24.609 "listen_address": { 00:17:24.609 "adrfam": "IPv4", 00:17:24.609 "traddr": "10.0.0.2", 00:17:24.609 "trsvcid": "4420", 00:17:24.609 "trtype": "TCP" 00:17:24.609 }, 00:17:24.609 "peer_address": { 00:17:24.609 "adrfam": "IPv4", 00:17:24.609 "traddr": "10.0.0.1", 00:17:24.609 "trsvcid": "48882", 00:17:24.609 "trtype": "TCP" 00:17:24.609 }, 00:17:24.609 "qid": 0, 00:17:24.609 "state": "enabled", 00:17:24.609 "thread": "nvmf_tgt_poll_group_000" 00:17:24.609 } 00:17:24.609 ]' 00:17:24.609 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.609 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.609 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.609 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.866 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.866 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.866 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.866 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.123 18:42:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.687 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.947 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.512 00:17:26.512 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.512 18:43:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.512 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.769 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.769 { 00:17:26.769 "auth": { 00:17:26.769 "dhgroup": "ffdhe3072", 00:17:26.769 "digest": "sha512", 00:17:26.769 "state": "completed" 00:17:26.769 }, 00:17:26.769 "cntlid": 115, 00:17:26.769 "listen_address": { 00:17:26.769 "adrfam": "IPv4", 00:17:26.769 "traddr": "10.0.0.2", 00:17:26.769 "trsvcid": "4420", 00:17:26.769 "trtype": "TCP" 00:17:26.769 }, 00:17:26.769 "peer_address": { 00:17:26.769 "adrfam": "IPv4", 00:17:26.769 "traddr": "10.0.0.1", 00:17:26.769 "trsvcid": "51976", 00:17:26.769 "trtype": "TCP" 00:17:26.769 }, 00:17:26.769 "qid": 0, 00:17:26.769 "state": "enabled", 00:17:26.769 "thread": "nvmf_tgt_poll_group_000" 00:17:26.769 } 00:17:26.769 ]' 00:17:26.770 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.770 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.770 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.027 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.027 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.027 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.027 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.027 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.284 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.847 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.105 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.364 18:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.364 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.364 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.622 00:17:28.622 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.622 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.622 18:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.881 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.881 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.881 18:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.881 18:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.881 18:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.140 { 00:17:29.140 "auth": { 00:17:29.140 "dhgroup": "ffdhe3072", 00:17:29.140 "digest": "sha512", 00:17:29.140 "state": "completed" 00:17:29.140 }, 00:17:29.140 "cntlid": 117, 00:17:29.140 "listen_address": { 00:17:29.140 "adrfam": "IPv4", 00:17:29.140 "traddr": 
"10.0.0.2", 00:17:29.140 "trsvcid": "4420", 00:17:29.140 "trtype": "TCP" 00:17:29.140 }, 00:17:29.140 "peer_address": { 00:17:29.140 "adrfam": "IPv4", 00:17:29.140 "traddr": "10.0.0.1", 00:17:29.140 "trsvcid": "51998", 00:17:29.140 "trtype": "TCP" 00:17:29.140 }, 00:17:29.140 "qid": 0, 00:17:29.140 "state": "enabled", 00:17:29.140 "thread": "nvmf_tgt_poll_group_000" 00:17:29.140 } 00:17:29.140 ]' 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.140 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.399 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.333 18:43:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.333 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.592 18:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.592 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.592 18:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.851 00:17:30.851 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.851 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.851 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.109 { 00:17:31.109 "auth": { 00:17:31.109 "dhgroup": "ffdhe3072", 00:17:31.109 "digest": "sha512", 00:17:31.109 "state": "completed" 00:17:31.109 }, 00:17:31.109 "cntlid": 119, 00:17:31.109 "listen_address": { 00:17:31.109 "adrfam": "IPv4", 00:17:31.109 "traddr": "10.0.0.2", 00:17:31.109 "trsvcid": "4420", 00:17:31.109 "trtype": "TCP" 00:17:31.109 }, 00:17:31.109 "peer_address": { 00:17:31.109 "adrfam": "IPv4", 00:17:31.109 "traddr": "10.0.0.1", 00:17:31.109 "trsvcid": "52026", 00:17:31.109 "trtype": "TCP" 00:17:31.109 }, 00:17:31.109 "qid": 0, 00:17:31.109 "state": "enabled", 00:17:31.109 "thread": "nvmf_tgt_poll_group_000" 00:17:31.109 } 00:17:31.109 ]' 00:17:31.109 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.367 18:43:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.625 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.556 18:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.556 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:33.133 00:17:33.133 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.133 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.133 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.390 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.390 { 00:17:33.390 "auth": { 00:17:33.390 "dhgroup": "ffdhe4096", 00:17:33.390 "digest": "sha512", 00:17:33.390 "state": "completed" 00:17:33.390 }, 00:17:33.390 "cntlid": 121, 00:17:33.390 "listen_address": { 00:17:33.390 "adrfam": "IPv4", 00:17:33.390 "traddr": "10.0.0.2", 00:17:33.390 "trsvcid": "4420", 00:17:33.390 "trtype": "TCP" 00:17:33.390 }, 00:17:33.390 "peer_address": { 00:17:33.390 "adrfam": "IPv4", 00:17:33.390 "traddr": "10.0.0.1", 00:17:33.390 "trsvcid": "52038", 00:17:33.390 "trtype": "TCP" 00:17:33.390 }, 00:17:33.390 "qid": 0, 00:17:33.390 "state": "enabled", 00:17:33.390 "thread": "nvmf_tgt_poll_group_000" 00:17:33.390 } 00:17:33.390 ]' 00:17:33.648 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.648 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.648 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.648 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.648 18:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.648 18:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.648 18:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.648 18:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.905 18:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.838 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.095 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.354 00:17:35.354 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.354 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.354 18:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.612 { 00:17:35.612 "auth": { 
00:17:35.612 "dhgroup": "ffdhe4096", 00:17:35.612 "digest": "sha512", 00:17:35.612 "state": "completed" 00:17:35.612 }, 00:17:35.612 "cntlid": 123, 00:17:35.612 "listen_address": { 00:17:35.612 "adrfam": "IPv4", 00:17:35.612 "traddr": "10.0.0.2", 00:17:35.612 "trsvcid": "4420", 00:17:35.612 "trtype": "TCP" 00:17:35.612 }, 00:17:35.612 "peer_address": { 00:17:35.612 "adrfam": "IPv4", 00:17:35.612 "traddr": "10.0.0.1", 00:17:35.612 "trsvcid": "52058", 00:17:35.612 "trtype": "TCP" 00:17:35.612 }, 00:17:35.612 "qid": 0, 00:17:35.612 "state": "enabled", 00:17:35.612 "thread": "nvmf_tgt_poll_group_000" 00:17:35.612 } 00:17:35.612 ]' 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.612 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.871 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.871 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.871 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.871 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.871 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.130 18:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:36.696 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.696 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:36.696 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.696 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.955 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.955 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.955 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:36.955 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 
00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.213 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.471 00:17:37.471 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.471 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.471 18:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.037 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.037 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.037 18:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.037 18:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.037 18:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.038 { 00:17:38.038 "auth": { 00:17:38.038 "dhgroup": "ffdhe4096", 00:17:38.038 "digest": "sha512", 00:17:38.038 "state": "completed" 00:17:38.038 }, 00:17:38.038 "cntlid": 125, 00:17:38.038 "listen_address": { 00:17:38.038 "adrfam": "IPv4", 00:17:38.038 "traddr": "10.0.0.2", 00:17:38.038 "trsvcid": "4420", 00:17:38.038 "trtype": "TCP" 00:17:38.038 }, 00:17:38.038 "peer_address": { 00:17:38.038 "adrfam": "IPv4", 00:17:38.038 "traddr": "10.0.0.1", 00:17:38.038 "trsvcid": "38620", 00:17:38.038 "trtype": "TCP" 00:17:38.038 }, 00:17:38.038 "qid": 0, 00:17:38.038 "state": "enabled", 00:17:38.038 "thread": "nvmf_tgt_poll_group_000" 00:17:38.038 } 00:17:38.038 ]' 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.038 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.297 18:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:38.862 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.121 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.378 18:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.378 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.378 18:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.637 00:17:39.637 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.637 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.637 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.894 { 00:17:39.894 "auth": { 00:17:39.894 "dhgroup": "ffdhe4096", 00:17:39.894 "digest": "sha512", 00:17:39.894 "state": "completed" 00:17:39.894 }, 00:17:39.894 "cntlid": 127, 00:17:39.894 "listen_address": { 00:17:39.894 "adrfam": "IPv4", 00:17:39.894 "traddr": "10.0.0.2", 00:17:39.894 "trsvcid": "4420", 00:17:39.894 "trtype": "TCP" 00:17:39.894 }, 00:17:39.894 "peer_address": { 00:17:39.894 "adrfam": "IPv4", 00:17:39.894 "traddr": "10.0.0.1", 00:17:39.894 "trsvcid": "38638", 00:17:39.894 "trtype": "TCP" 00:17:39.894 }, 00:17:39.894 "qid": 0, 00:17:39.894 "state": "enabled", 00:17:39.894 "thread": "nvmf_tgt_poll_group_000" 00:17:39.894 } 00:17:39.894 ]' 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.894 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.152 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.152 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.152 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.152 18:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.087 18:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.656 00:17:41.656 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.656 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.656 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
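Each (dhgroup, keyid) iteration in the trace repeats the same three-step setup seen above: restrict the host's allowed digests and DH groups, register the host NQN on the target with the keys under test, then attach a controller so DH-HMAC-CHAP is actually negotiated over the fabric. A condensed sketch of that sequence for the ffdhe6144/key0 case, using only RPCs and flags that appear in the trace (key0/ckey0 are key names set up earlier in the test, shown here as-is):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  nqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08

  # Host side: only offer sha512 + ffdhe6144 during DH-HMAC-CHAP negotiation.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side: allow this host, bound to key0 (ckey0 enables bidirectional authentication).
  "$rpc" nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; this is where the authentication exchange runs.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$nqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0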
00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.914 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.914 { 00:17:41.914 "auth": { 00:17:41.914 "dhgroup": "ffdhe6144", 00:17:41.914 "digest": "sha512", 00:17:41.914 "state": "completed" 00:17:41.914 }, 00:17:41.914 "cntlid": 129, 00:17:41.914 "listen_address": { 00:17:41.914 "adrfam": "IPv4", 00:17:41.914 "traddr": "10.0.0.2", 00:17:41.914 "trsvcid": "4420", 00:17:41.914 "trtype": "TCP" 00:17:41.914 }, 00:17:41.915 "peer_address": { 00:17:41.915 "adrfam": "IPv4", 00:17:41.915 "traddr": "10.0.0.1", 00:17:41.915 "trsvcid": "38660", 00:17:41.915 "trtype": "TCP" 00:17:41.915 }, 00:17:41.915 "qid": 0, 00:17:41.915 "state": "enabled", 00:17:41.915 "thread": "nvmf_tgt_poll_group_000" 00:17:41.915 } 00:17:41.915 ]' 00:17:41.915 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.915 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.915 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.173 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.173 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.173 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.173 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.174 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.432 18:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:42.999 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:43.257 18:43:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.257 18:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.824 00:17:43.824 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.824 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.824 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.083 { 00:17:44.083 "auth": { 00:17:44.083 "dhgroup": "ffdhe6144", 00:17:44.083 "digest": "sha512", 00:17:44.083 "state": "completed" 00:17:44.083 }, 00:17:44.083 "cntlid": 131, 00:17:44.083 "listen_address": { 00:17:44.083 "adrfam": "IPv4", 00:17:44.083 "traddr": "10.0.0.2", 00:17:44.083 "trsvcid": "4420", 00:17:44.083 "trtype": "TCP" 00:17:44.083 }, 00:17:44.083 "peer_address": { 00:17:44.083 "adrfam": "IPv4", 00:17:44.083 "traddr": "10.0.0.1", 00:17:44.083 "trsvcid": "38688", 00:17:44.083 "trtype": "TCP" 00:17:44.083 }, 00:17:44.083 "qid": 0, 00:17:44.083 "state": "enabled", 00:17:44.083 "thread": "nvmf_tgt_poll_group_000" 00:17:44.083 } 00:17:44.083 ]' 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.083 18:43:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.083 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.650 18:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:45.216 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.475 18:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.040 00:17:46.040 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.040 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.040 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.298 { 00:17:46.298 "auth": { 00:17:46.298 "dhgroup": "ffdhe6144", 00:17:46.298 "digest": "sha512", 00:17:46.298 "state": "completed" 00:17:46.298 }, 00:17:46.298 "cntlid": 133, 00:17:46.298 "listen_address": { 00:17:46.298 "adrfam": "IPv4", 00:17:46.298 "traddr": "10.0.0.2", 00:17:46.298 "trsvcid": "4420", 00:17:46.298 "trtype": "TCP" 00:17:46.298 }, 00:17:46.298 "peer_address": { 00:17:46.298 "adrfam": "IPv4", 00:17:46.298 "traddr": "10.0.0.1", 00:17:46.298 "trsvcid": "49806", 00:17:46.298 "trtype": "TCP" 00:17:46.298 }, 00:17:46.298 "qid": 0, 00:17:46.298 "state": "enabled", 00:17:46.298 "thread": "nvmf_tgt_poll_group_000" 00:17:46.298 } 00:17:46.298 ]' 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.298 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.555 18:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret 
DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.490 18:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.749 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.331 00:17:48.331 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.331 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.331 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.588 18:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.588 { 00:17:48.588 "auth": { 00:17:48.588 "dhgroup": "ffdhe6144", 00:17:48.588 "digest": "sha512", 00:17:48.588 "state": "completed" 00:17:48.588 }, 00:17:48.588 "cntlid": 135, 00:17:48.588 "listen_address": { 00:17:48.588 "adrfam": "IPv4", 00:17:48.588 "traddr": "10.0.0.2", 00:17:48.588 "trsvcid": "4420", 00:17:48.588 "trtype": "TCP" 00:17:48.588 }, 00:17:48.588 "peer_address": { 00:17:48.588 "adrfam": "IPv4", 00:17:48.588 "traddr": "10.0.0.1", 00:17:48.588 "trsvcid": "49844", 00:17:48.588 "trtype": "TCP" 00:17:48.588 }, 00:17:48.588 "qid": 0, 00:17:48.588 "state": "enabled", 00:17:48.588 "thread": "nvmf_tgt_poll_group_000" 00:17:48.588 } 00:17:48.588 ]' 00:17:48.588 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.588 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.588 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.848 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.848 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.848 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.848 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.848 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.106 18:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.040 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.298 18:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.864 00:17:50.864 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.864 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.864 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.131 { 00:17:51.131 "auth": { 00:17:51.131 "dhgroup": "ffdhe8192", 00:17:51.131 "digest": "sha512", 00:17:51.131 "state": "completed" 00:17:51.131 }, 00:17:51.131 "cntlid": 137, 00:17:51.131 "listen_address": { 00:17:51.131 "adrfam": "IPv4", 00:17:51.131 "traddr": "10.0.0.2", 00:17:51.131 "trsvcid": "4420", 00:17:51.131 "trtype": "TCP" 00:17:51.131 }, 00:17:51.131 "peer_address": { 00:17:51.131 "adrfam": "IPv4", 00:17:51.131 "traddr": "10.0.0.1", 00:17:51.131 "trsvcid": "49864", 00:17:51.131 "trtype": "TCP" 00:17:51.131 }, 00:17:51.131 "qid": 0, 00:17:51.131 "state": "enabled", 00:17:51.131 "thread": "nvmf_tgt_poll_group_000" 00:17:51.131 } 
00:17:51.131 ]' 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.131 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.389 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.389 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.389 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.647 18:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.575 18:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.831 18:43:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.831 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.403 00:17:53.404 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.404 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.404 18:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.701 { 00:17:53.701 "auth": { 00:17:53.701 "dhgroup": "ffdhe8192", 00:17:53.701 "digest": "sha512", 00:17:53.701 "state": "completed" 00:17:53.701 }, 00:17:53.701 "cntlid": 139, 00:17:53.701 "listen_address": { 00:17:53.701 "adrfam": "IPv4", 00:17:53.701 "traddr": "10.0.0.2", 00:17:53.701 "trsvcid": "4420", 00:17:53.701 "trtype": "TCP" 00:17:53.701 }, 00:17:53.701 "peer_address": { 00:17:53.701 "adrfam": "IPv4", 00:17:53.701 "traddr": "10.0.0.1", 00:17:53.701 "trsvcid": "49900", 00:17:53.701 "trtype": "TCP" 00:17:53.701 }, 00:17:53.701 "qid": 0, 00:17:53.701 "state": "enabled", 00:17:53.701 "thread": "nvmf_tgt_poll_group_000" 00:17:53.701 } 00:17:53.701 ]' 00:17:53.701 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.975 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.232 18:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:01:NGVjMTk0M2IwZWQ5NWNjMmQzY2M3ODRkOTkxZDA5MDAkMeBJ: --dhchap-ctrl-secret DHHC-1:02:YTU0N2E3MTQzNjc2NjJmOTIzYWY1MGU5YjhmMGYxZTRmYmQxOWY2MTY4ZjQyYTA5qCsCSw==: 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.159 18:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.724 00:17:55.724 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.724 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
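Besides the SPDK host stack, every iteration also authenticates with the kernel initiator: nvme-cli is handed the same secrets in DHHC-1 format via --dhchap-secret/--dhchap-ctrl-secret, connects, and then disconnects again before the host entry is removed from the target. A minimal sketch of that in-band check, with the secret strings elided (the real values are the DHHC-1 strings printed in the trace):

  nqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08
  host_key='DHHC-1:01:...'   # host secret as printed in the trace (elided here)
  ctrl_key='DHHC-1:02:...'   # controller secret as printed in the trace (elided here)

  # Kernel initiator path: connect with DH-HMAC-CHAP secrets, then tear down.
  nvme connect -t tcp -a 10.0.0.2 -n "$nqn" -i 1 -q "$hostnqn" \
      --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 \
      --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n "$nqn"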
00:17:55.724 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.982 { 00:17:55.982 "auth": { 00:17:55.982 "dhgroup": "ffdhe8192", 00:17:55.982 "digest": "sha512", 00:17:55.982 "state": "completed" 00:17:55.982 }, 00:17:55.982 "cntlid": 141, 00:17:55.982 "listen_address": { 00:17:55.982 "adrfam": "IPv4", 00:17:55.982 "traddr": "10.0.0.2", 00:17:55.982 "trsvcid": "4420", 00:17:55.982 "trtype": "TCP" 00:17:55.982 }, 00:17:55.982 "peer_address": { 00:17:55.982 "adrfam": "IPv4", 00:17:55.982 "traddr": "10.0.0.1", 00:17:55.982 "trsvcid": "49936", 00:17:55.982 "trtype": "TCP" 00:17:55.982 }, 00:17:55.982 "qid": 0, 00:17:55.982 "state": "enabled", 00:17:55.982 "thread": "nvmf_tgt_poll_group_000" 00:17:55.982 } 00:17:55.982 ]' 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.982 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.240 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.240 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.240 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.240 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.240 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.498 18:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:02:OWIyMDMzMDM5OWVkMWFlYzFkZDQyNjk3MWVjMTc5ZjRmODU4MTA3ODJjN2Y3MDViANeQhg==: --dhchap-ctrl-secret DHHC-1:01:MzE0ZDBmOWE3M2EyMWY1MTBlNDI2M2UwNjhiMmI3MDjskAre: 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.064 18:43:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.064 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.321 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.322 18:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.888 00:17:58.146 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.146 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.146 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.405 { 00:17:58.405 "auth": { 00:17:58.405 "dhgroup": "ffdhe8192", 00:17:58.405 "digest": "sha512", 00:17:58.405 "state": "completed" 00:17:58.405 }, 00:17:58.405 "cntlid": 143, 00:17:58.405 "listen_address": { 00:17:58.405 "adrfam": "IPv4", 00:17:58.405 "traddr": "10.0.0.2", 00:17:58.405 "trsvcid": "4420", 00:17:58.405 "trtype": "TCP" 00:17:58.405 }, 00:17:58.405 "peer_address": { 00:17:58.405 "adrfam": "IPv4", 00:17:58.405 "traddr": "10.0.0.1", 00:17:58.405 "trsvcid": 
"52258", 00:17:58.405 "trtype": "TCP" 00:17:58.405 }, 00:17:58.405 "qid": 0, 00:17:58.405 "state": "enabled", 00:17:58.405 "thread": "nvmf_tgt_poll_group_000" 00:17:58.405 } 00:17:58.405 ]' 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.405 18:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.664 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:17:59.644 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:59.645 18:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.645 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.575 00:18:00.575 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.575 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.575 18:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.832 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.832 { 00:18:00.832 "auth": { 00:18:00.832 "dhgroup": "ffdhe8192", 00:18:00.832 "digest": "sha512", 00:18:00.832 "state": "completed" 00:18:00.832 }, 00:18:00.832 "cntlid": 145, 00:18:00.832 "listen_address": { 00:18:00.832 "adrfam": "IPv4", 00:18:00.832 "traddr": "10.0.0.2", 00:18:00.832 "trsvcid": "4420", 00:18:00.832 "trtype": "TCP" 00:18:00.832 }, 00:18:00.832 "peer_address": { 00:18:00.832 "adrfam": "IPv4", 00:18:00.832 "traddr": "10.0.0.1", 00:18:00.832 "trsvcid": "52284", 00:18:00.832 "trtype": "TCP" 00:18:00.832 }, 00:18:00.832 "qid": 0, 00:18:00.832 "state": "enabled", 00:18:00.832 "thread": "nvmf_tgt_poll_group_000" 00:18:00.832 } 00:18:00.832 ]' 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.833 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.090 18:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:00:NDZmNzJiMDE0YTY3NDgyM2ZiZmYwYjhmNDU2NmJjNjljZTc1NGIxYzBkMzJhMDljDlxgGA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YmE2ZDk1ZDc5MDE0NjVjOTcwOWQwOTA0OGVlNjY5YTcwNWQ3NGY5Y2Y1MTM4ZmU4ZWU5NzBiOWMzZjkzNvgENHc=: 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.657 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:01.658 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.224 2024/07/15 18:43:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:02.224 request: 00:18:02.224 { 00:18:02.224 "method": "bdev_nvme_attach_controller", 00:18:02.224 "params": { 00:18:02.224 "name": "nvme0", 00:18:02.224 "trtype": "tcp", 00:18:02.224 "traddr": "10.0.0.2", 00:18:02.224 "adrfam": "ipv4", 00:18:02.224 "trsvcid": "4420", 00:18:02.224 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:02.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:02.224 "prchk_reftag": false, 00:18:02.224 "prchk_guard": false, 00:18:02.224 "hdgst": false, 00:18:02.224 "ddgst": false, 00:18:02.224 "dhchap_key": "key2" 00:18:02.224 } 00:18:02.224 } 00:18:02.224 Got JSON-RPC error response 00:18:02.224 GoRPCClient: error on JSON-RPC call 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey2 00:18:02.224 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:02.225 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.225 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:02.225 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.225 18:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:02.225 18:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:03.160 2024/07/15 18:43:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:03.160 request: 00:18:03.160 { 00:18:03.160 "method": "bdev_nvme_attach_controller", 00:18:03.160 "params": { 00:18:03.160 "name": "nvme0", 00:18:03.160 "trtype": "tcp", 00:18:03.160 "traddr": "10.0.0.2", 00:18:03.161 "adrfam": "ipv4", 00:18:03.161 "trsvcid": "4420", 00:18:03.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:03.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:03.161 "prchk_reftag": false, 00:18:03.161 "prchk_guard": false, 00:18:03.161 "hdgst": false, 00:18:03.161 "ddgst": false, 00:18:03.161 "dhchap_key": "key1", 00:18:03.161 "dhchap_ctrlr_key": "ckey2" 00:18:03.161 } 00:18:03.161 } 00:18:03.161 Got JSON-RPC error response 00:18:03.161 GoRPCClient: error on JSON-RPC call 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key1 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.161 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.727 2024/07/15 18:43:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:03.727 request: 00:18:03.727 { 00:18:03.727 "method": "bdev_nvme_attach_controller", 00:18:03.727 "params": { 00:18:03.727 "name": "nvme0", 00:18:03.727 "trtype": "tcp", 00:18:03.727 "traddr": "10.0.0.2", 00:18:03.727 "adrfam": "ipv4", 00:18:03.727 "trsvcid": "4420", 00:18:03.727 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:03.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:03.727 "prchk_reftag": false, 00:18:03.727 "prchk_guard": false, 00:18:03.727 "hdgst": false, 00:18:03.727 "ddgst": false, 00:18:03.727 "dhchap_key": "key1", 00:18:03.727 "dhchap_ctrlr_key": "ckey1" 00:18:03.727 } 00:18:03.727 } 00:18:03.727 Got JSON-RPC error response 00:18:03.727 GoRPCClient: error on JSON-RPC call 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.727 18:43:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 78331 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78331 ']' 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78331 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78331 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78331' 00:18:03.727 killing process with pid 78331 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78331 00:18:03.727 18:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78331 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=83235 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 83235 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83235 ']' 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
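The trace above tears down the first nvmf target (pid 78331) with the suite's killprocess helper before relaunching it with DH-HMAC-CHAP debug logging. A minimal sketch of that kill-and-reap pattern, assuming the pid belongs to a child of the current shell so that wait can reap it; the real helper also handles sudo-wrapped processes rather than simply refusing them:

    # Kill a background SPDK app and wait for it to exit.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1              # no pid, nothing to do
        kill -0 "$pid" || return 1               # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
        [[ "$name" != sudo ]] || return 1        # simplified: never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap it; ignore the exit status
    }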
00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.727 18:43:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 83235 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83235 ']' 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
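After relaunching nvmf_tgt with --wait-for-rpc -L nvmf_auth, the script blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A simplified sketch of that polling loop, assuming scripts/rpc.py from the SPDK repo and its standard rpc_get_methods call; the retry budget mirrors the max_retries=100 seen in the trace, and the real helper is more elaborate:

    # Wait until the SPDK app with the given pid accepts JSON-RPC requests.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                                  # app died while starting
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                                # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                                        # never came up
    }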
00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.096 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.353 18:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.917 00:18:05.917 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.917 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.917 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.180 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.180 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.180 18:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.180 18:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.438 { 00:18:06.438 "auth": { 00:18:06.438 "dhgroup": 
"ffdhe8192", 00:18:06.438 "digest": "sha512", 00:18:06.438 "state": "completed" 00:18:06.438 }, 00:18:06.438 "cntlid": 1, 00:18:06.438 "listen_address": { 00:18:06.438 "adrfam": "IPv4", 00:18:06.438 "traddr": "10.0.0.2", 00:18:06.438 "trsvcid": "4420", 00:18:06.438 "trtype": "TCP" 00:18:06.438 }, 00:18:06.438 "peer_address": { 00:18:06.438 "adrfam": "IPv4", 00:18:06.438 "traddr": "10.0.0.1", 00:18:06.438 "trsvcid": "52340", 00:18:06.438 "trtype": "TCP" 00:18:06.438 }, 00:18:06.438 "qid": 0, 00:18:06.438 "state": "enabled", 00:18:06.438 "thread": "nvmf_tgt_poll_group_000" 00:18:06.438 } 00:18:06.438 ]' 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.438 18:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.695 18:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret DHHC-1:03:YTg0NzI0YTFkNjI3NzA1ZWY5ODg0ZTcxZjc0MDFiNTBmYTk1OTc3M2E5YjE0MTllYTZlZTE0YTFlZTM1M2YyMZPvnOY=: 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-key key3 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:07.627 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.885 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.143 2024/07/15 18:43:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:08.143 request: 00:18:08.143 { 00:18:08.143 "method": "bdev_nvme_attach_controller", 00:18:08.143 "params": { 00:18:08.143 "name": "nvme0", 00:18:08.143 "trtype": "tcp", 00:18:08.143 "traddr": "10.0.0.2", 00:18:08.143 "adrfam": "ipv4", 00:18:08.143 "trsvcid": "4420", 00:18:08.143 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:08.143 "prchk_reftag": false, 00:18:08.143 "prchk_guard": false, 00:18:08.143 "hdgst": false, 00:18:08.143 "ddgst": false, 00:18:08.143 "dhchap_key": "key3" 00:18:08.143 } 00:18:08.143 } 00:18:08.143 Got JSON-RPC error response 00:18:08.143 GoRPCClient: error on JSON-RPC call 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
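The positive-path checks earlier in this block all follow the same connect_authenticate round trip: bind a DH-HMAC-CHAP key to the host entry on the target, attach from the SPDK host RPC server, inspect the negotiated qpair, then repeat the handshake with the kernel initiator. A condensed sketch of that flow, using the socket paths and NQNs from the trace; key names such as key3 come from the test's earlier keyring setup (outside this excerpt), and the DHHC-1 secret is elided:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08
    RPC=scripts/rpc.py

    # Target side: allow this host and require DH-HMAC-CHAP with key3.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side (SPDK bdev_nvme initiator): attach with the matching key.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3

    # Verify what the target negotiated on the resulting qpair.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    # expected: digest sha512, dhgroup ffdhe8192, state completed

    # Tear down the SPDK initiator, then redo the handshake with the kernel
    # host, passing the secret itself instead of a keyring name.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 6595a4fd-62c0-4385-bb15-2b50828eda08 --dhchap-secret 'DHHC-1:03:...'
    nvme disconnect -n "$SUBNQN"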
00:18:08.143 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.708 18:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.708 2024/07/15 18:43:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:08.708 request: 00:18:08.708 { 00:18:08.708 "method": "bdev_nvme_attach_controller", 00:18:08.708 "params": { 00:18:08.709 "name": "nvme0", 00:18:08.709 "trtype": "tcp", 00:18:08.709 "traddr": "10.0.0.2", 00:18:08.709 "adrfam": "ipv4", 00:18:08.709 "trsvcid": "4420", 00:18:08.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:08.709 "prchk_reftag": false, 00:18:08.709 "prchk_guard": false, 00:18:08.709 "hdgst": false, 00:18:08.709 "ddgst": false, 00:18:08.709 "dhchap_key": "key3" 00:18:08.709 } 00:18:08.709 } 00:18:08.709 Got JSON-RPC error response 00:18:08.709 GoRPCClient: error on JSON-RPC call 00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
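Each of the mismatched-key or mismatched-digest attach attempts above is wrapped in the suite's NOT helper, which inverts the wrapped command's exit status so that an expected JSON-RPC failure (the Code=-5 Input/output error responses) counts as a pass. A reduced sketch of that inversion, keeping only the signal check visible in the trace:

    # Succeed only if the wrapped command fails; used for expected-error cases
    # such as attaching with a DH-HMAC-CHAP key the target will not accept.
    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then     # killed by a signal: propagate, do not invert
            return "$es"
        fi
        (( es != 0 ))               # pass (0) only when the command actually failed
    }

    # e.g. NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller ... --dhchap-key key2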
00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:08.709 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:08.965 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:08.965 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.965 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.965 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.222 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.479 2024/07/15 18:43:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:09.479 request: 00:18:09.479 { 00:18:09.479 "method": "bdev_nvme_attach_controller", 00:18:09.479 "params": { 00:18:09.479 "name": "nvme0", 00:18:09.479 "trtype": "tcp", 00:18:09.479 "traddr": "10.0.0.2", 00:18:09.479 "adrfam": "ipv4", 00:18:09.479 "trsvcid": "4420", 00:18:09.479 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:09.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08", 00:18:09.479 "prchk_reftag": false, 00:18:09.479 "prchk_guard": false, 00:18:09.479 "hdgst": false, 00:18:09.479 "ddgst": false, 00:18:09.479 "dhchap_key": "key0", 00:18:09.479 "dhchap_ctrlr_key": "key1" 00:18:09.479 } 00:18:09.479 } 00:18:09.479 Got JSON-RPC error response 00:18:09.479 GoRPCClient: error on JSON-RPC call 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:09.479 18:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:09.736 00:18:09.736 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:09.736 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.736 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:09.993 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.993 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.993 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78375 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 78375 ']' 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78375 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78375 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.559 killing process with pid 78375 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78375' 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78375 00:18:10.559 18:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78375 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.816 rmmod nvme_tcp 00:18:10.816 rmmod nvme_fabrics 00:18:10.816 rmmod nvme_keyring 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 83235 ']' 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 83235 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 83235 ']' 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 83235 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.816 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83235 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83235' 00:18:11.074 killing process with pid 83235 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 83235 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 83235 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.074 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.332 18:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:11.332 18:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WSC /tmp/spdk.key-sha256.xTX /tmp/spdk.key-sha384.iQM /tmp/spdk.key-sha512.iP5 /tmp/spdk.key-sha512.pYf /tmp/spdk.key-sha384.6C6 /tmp/spdk.key-sha256.Gw3 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:11.332 00:18:11.332 real 2m54.043s 00:18:11.332 user 6m53.750s 00:18:11.332 sys 0m30.861s 00:18:11.332 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.332 18:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.332 ************************************ 00:18:11.332 END TEST nvmf_auth_target 00:18:11.332 ************************************ 00:18:11.332 18:43:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.332 18:43:45 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:11.332 18:43:45 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.332 18:43:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:11.332 18:43:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.332 18:43:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.332 ************************************ 00:18:11.332 START TEST nvmf_bdevio_no_huge 00:18:11.332 ************************************ 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.332 * Looking for test storage... 
00:18:11.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.332 18:43:45 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.332 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:11.589 Cannot find device "nvmf_tgt_br" 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.589 Cannot find device "nvmf_tgt_br2" 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:11.589 Cannot find device "nvmf_tgt_br" 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:11.589 Cannot find device "nvmf_tgt_br2" 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
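The Cannot find device messages above are expected: nvmf_veth_init starts by tearing down whatever topology a previous run may have left behind, tolerating failures for objects that no longer exist (hence the bare true traced after each failing command). A sketch of that defensive pre-clean using the same interface names; the final namespace removal is an assumption here, since in the suite that part is handled by remove_spdk_ns:

    # Best-effort removal of a previous run's virtual topology; every step may
    # fail harmlessly if the object was never created or is already gone.
    pre_clean() {
        local ns=nvmf_tgt_ns_spdk
        local link
        for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
            ip link set "$link" nomaster 2>/dev/null || true
            ip link set "$link" down     2>/dev/null || true
        done
        ip link delete nvmf_br type bridge              2>/dev/null || true
        ip link delete nvmf_init_if                     2>/dev/null || true
        ip netns exec "$ns" ip link delete nvmf_tgt_if  2>/dev/null || true
        ip netns exec "$ns" ip link delete nvmf_tgt_if2 2>/dev/null || true
        ip netns delete "$ns"                           2>/dev/null || true   # assumption, see note above
    }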
00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.589 18:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.589 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:11.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:11.850 00:18:11.850 --- 10.0.0.2 ping statistics --- 00:18:11.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.850 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:11.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:11.850 00:18:11.850 --- 10.0.0.3 ping statistics --- 00:18:11.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.850 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:18:11.850 00:18:11.850 --- 10.0.0.1 ping statistics --- 00:18:11.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.850 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83647 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83647 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83647 ']' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
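Condensed for reference, the nvmf_veth_init sequence traced above builds one initiator veth pair left in the root namespace, two target veth pairs moved into nvmf_tgt_ns_spdk, and a bridge joining the root-side peers; the nvmf_tgt being started next runs inside that namespace through the NVMF_TARGET_NS_CMD prefix. A sketch with interface names and addresses taken from the trace, not a substitute for nvmf/common.sh:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root namespace reaches both target IPs
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace reaches the initiator IP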
00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.850 18:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:11.850 [2024-07-15 18:43:46.314283] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:11.850 [2024-07-15 18:43:46.314418] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:12.115 [2024-07-15 18:43:46.477535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.372 [2024-07-15 18:43:46.652504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.372 [2024-07-15 18:43:46.652601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.372 [2024-07-15 18:43:46.652621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.372 [2024-07-15 18:43:46.652637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.372 [2024-07-15 18:43:46.652651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.372 [2024-07-15 18:43:46.652858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:12.372 [2024-07-15 18:43:46.653033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:12.372 [2024-07-15 18:43:46.653659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:12.372 [2024-07-15 18:43:46.653664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.937 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.195 [2024-07-15 18:43:47.423167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.195 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.195 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:13.195 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.195 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.195 Malloc0 00:18:13.195 
18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.196 [2024-07-15 18:43:47.468959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.196 { 00:18:13.196 "params": { 00:18:13.196 "name": "Nvme$subsystem", 00:18:13.196 "trtype": "$TEST_TRANSPORT", 00:18:13.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.196 "adrfam": "ipv4", 00:18:13.196 "trsvcid": "$NVMF_PORT", 00:18:13.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.196 "hdgst": ${hdgst:-false}, 00:18:13.196 "ddgst": ${ddgst:-false} 00:18:13.196 }, 00:18:13.196 "method": "bdev_nvme_attach_controller" 00:18:13.196 } 00:18:13.196 EOF 00:18:13.196 )") 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
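At this point the target, launched inside the namespace with --no-huge -s 1024 (1024 MB of ordinary anonymous memory instead of hugepages), is fully configured, and bdevio is being handed its config on /dev/fd/62 via process substitution. A rough standalone equivalent, with the rpc.py path shortened and the config written to a hypothetical temp file instead of a file descriptor; the exact wrapper gen_nvmf_target_json emits may contain additional entries:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdev_nvme config for the initiator side, matching the JSON printed in the trace below
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024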
00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:13.196 18:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.196 "params": { 00:18:13.196 "name": "Nvme1", 00:18:13.196 "trtype": "tcp", 00:18:13.196 "traddr": "10.0.0.2", 00:18:13.196 "adrfam": "ipv4", 00:18:13.196 "trsvcid": "4420", 00:18:13.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.196 "hdgst": false, 00:18:13.196 "ddgst": false 00:18:13.196 }, 00:18:13.196 "method": "bdev_nvme_attach_controller" 00:18:13.196 }' 00:18:13.196 [2024-07-15 18:43:47.521527] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:13.196 [2024-07-15 18:43:47.521636] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83701 ] 00:18:13.196 [2024-07-15 18:43:47.669201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.455 [2024-07-15 18:43:47.857910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.455 [2024-07-15 18:43:47.858044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.455 [2024-07-15 18:43:47.858048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.713 I/O targets: 00:18:13.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:13.713 00:18:13.713 00:18:13.713 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.713 http://cunit.sourceforge.net/ 00:18:13.713 00:18:13.713 00:18:13.713 Suite: bdevio tests on: Nvme1n1 00:18:13.713 Test: blockdev write read block ...passed 00:18:13.713 Test: blockdev write zeroes read block ...passed 00:18:13.713 Test: blockdev write zeroes read no split ...passed 00:18:13.972 Test: blockdev write zeroes read split ...passed 00:18:13.972 Test: blockdev write zeroes read split partial ...passed 00:18:13.972 Test: blockdev reset ...[2024-07-15 18:43:48.236231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.972 [2024-07-15 18:43:48.236366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c94460 (9): Bad file descriptor 00:18:13.972 [2024-07-15 18:43:48.247242] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:13.972 passed 00:18:13.972 Test: blockdev write read 8 blocks ...passed 00:18:13.972 Test: blockdev write read size > 128k ...passed 00:18:13.972 Test: blockdev write read invalid size ...passed 00:18:13.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.972 Test: blockdev write read max offset ...passed 00:18:13.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.972 Test: blockdev writev readv 8 blocks ...passed 00:18:13.972 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.972 Test: blockdev writev readv block ...passed 00:18:13.972 Test: blockdev writev readv size > 128k ...passed 00:18:13.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.972 Test: blockdev comparev and writev ...[2024-07-15 18:43:48.423378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.423437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.423458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.423469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.423982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.424013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.424030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.424042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.424485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.424514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.424531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.424543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.424942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.424975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.972 [2024-07-15 18:43:48.424992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.972 [2024-07-15 18:43:48.425003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:14.231 passed 00:18:14.231 Test: blockdev nvme passthru rw ...passed 00:18:14.231 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:43:48.507414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.231 [2024-07-15 18:43:48.507464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:14.231 [2024-07-15 18:43:48.507607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.231 [2024-07-15 18:43:48.507622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:14.231 [2024-07-15 18:43:48.507761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.231 [2024-07-15 18:43:48.507781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:14.231 [2024-07-15 18:43:48.507916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.231 [2024-07-15 18:43:48.507938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:14.231 passed 00:18:14.231 Test: blockdev nvme admin passthru ...passed 00:18:14.231 Test: blockdev copy ...passed 00:18:14.231 00:18:14.231 Run Summary: Type Total Ran Passed Failed Inactive 00:18:14.231 suites 1 1 n/a 0 0 00:18:14.231 tests 23 23 23 0 0 00:18:14.231 asserts 152 152 152 0 n/a 00:18:14.231 00:18:14.231 Elapsed time = 0.963 seconds 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.796 rmmod nvme_tcp 00:18:14.796 rmmod nvme_fabrics 00:18:14.796 rmmod nvme_keyring 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83647 ']' 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83647 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83647 ']' 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83647 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83647 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:14.796 killing process with pid 83647 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83647' 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83647 00:18:14.796 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83647 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:15.361 00:18:15.361 real 0m3.992s 00:18:15.361 user 0m13.888s 00:18:15.361 sys 0m1.724s 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.361 18:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.361 ************************************ 00:18:15.361 END TEST nvmf_bdevio_no_huge 00:18:15.361 ************************************ 00:18:15.361 18:43:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:15.361 18:43:49 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:15.361 18:43:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:15.361 18:43:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.361 18:43:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.361 ************************************ 00:18:15.361 START TEST nvmf_tls 00:18:15.361 ************************************ 00:18:15.361 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:15.361 * Looking for test storage... 
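The teardown traced above follows a fixed pattern before the next suite starts: confirm the pid still belongs to an SPDK reactor, kill and reap it, unload the NVMe/TCP host modules, and flush the initiator address. A simplified sketch; the real killprocess in autotest_common.sh additionally escalates through sudo and tolerates non-child pids:

killprocess() {
    local pid=$1
    kill -0 "$pid"                                                 # make sure the process still exists
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1   # expect an SPDK reactor, not a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                                    # reap it so sockets and shm are really released
}

killprocess "$nvmfpid"
modprobe -v -r nvme-tcp          # -r also drops nvme_fabrics and nvme_keyring once unused, as in the rmmod lines above
modprobe -v -r nvme-fabrics
ip -4 addr flush nvmf_init_if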
00:18:15.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.361 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.620 18:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:15.621 Cannot find device "nvmf_tgt_br" 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.621 Cannot find device "nvmf_tgt_br2" 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:15.621 Cannot find device "nvmf_tgt_br" 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:15.621 Cannot find device "nvmf_tgt_br2" 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:15.621 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:15.621 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:15.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:18:15.879 00:18:15.879 --- 10.0.0.2 ping statistics --- 00:18:15.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.879 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:15.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:15.879 00:18:15.879 --- 10.0.0.3 ping statistics --- 00:18:15.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.879 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:15.879 00:18:15.879 --- 10.0.0.1 ping statistics --- 00:18:15.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.879 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83898 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83898 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83898 ']' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.879 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.879 [2024-07-15 18:43:50.316456] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:15.879 [2024-07-15 18:43:50.316563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.137 [2024-07-15 18:43:50.459509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.137 [2024-07-15 18:43:50.616524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.137 [2024-07-15 18:43:50.616597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.137 [2024-07-15 18:43:50.616610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.137 [2024-07-15 18:43:50.616620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.137 [2024-07-15 18:43:50.616628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.137 [2024-07-15 18:43:50.616663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:17.070 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:17.328 true 00:18:17.328 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.328 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:17.587 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:17.587 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:17.587 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:17.845 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.845 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:18.103 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:18.103 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:18.103 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:18.360 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.616 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:18.616 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:18.616 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:18.874 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.874 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
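Because the target above was started with --wait-for-rpc, the ssl socket implementation can be configured before subsystem initialization; the round-trips traced here simply set an option and read it back with jq. Condensed, with the rpc.py path shortened:

rpc=./scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
[[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]      # verify the setting stuck
$rpc sock_impl_set_options -i ssl --enable-ktls
[[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
$rpc sock_impl_set_options -i ssl --disable-ktls
[[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]
$rpc framework_start_init                                                  # let the paused target finish initializing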
00:18:19.438 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:19.438 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:19.438 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:19.438 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.438 18:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:19.695 18:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.wNmruzjlGL 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.FXoClgbFDh 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.wNmruzjlGL 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FXoClgbFDh 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:19.964 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:20.526 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.wNmruzjlGL 
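The two values written to the mktemp files above are PSKs in the NVMe TLS interchange format, 'NVMeTLSkey-1:<hh>:<base64 of the configured PSK followed by its CRC32>:', where 01 selects SHA-256. A hedged sketch of that encoding, mirroring the python heredoc visible in the trace; the little-endian CRC byte order and the digest-code mapping are assumptions to check against format_key in nvmf/common.sh:

format_interchange_psk() {
    local key=$1 digest=$2    # digest 1 -> "01" (SHA-256), 2 -> "02" (SHA-384); mapping assumed
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC32 of the configured PSK, appended little-endian (assumption)
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:", end="")
PY
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"    # the PSK sits in a plain file, so keep it owner-readable only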
00:18:20.526 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wNmruzjlGL 00:18:20.526 18:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.527 [2024-07-15 18:43:54.983248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.527 18:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.090 18:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.090 [2024-07-15 18:43:55.483323] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.090 [2024-07-15 18:43:55.483547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.090 18:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.348 malloc0 00:18:21.348 18:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.606 18:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNmruzjlGL 00:18:21.864 [2024-07-15 18:43:56.261001] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:21.865 18:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wNmruzjlGL 00:18:34.074 Initializing NVMe Controllers 00:18:34.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:34.074 Initialization complete. Launching workers. 
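The TLS pieces of the setup just traced differ from the earlier bdevio run in only two places: the listener is created with -k so it accepts TLS, and the host entry carries the PSK; the initiator then runs spdk_nvme_perf over the ssl socket implementation with the same key, from inside the target namespace as in the trace. Condensed, with the rpc.py path shortened:

rpc=./scripts/rpc.py
key=/tmp/tmp.wNmruzjlGL    # the mktemp PSK file from above
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS-capable listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

ip netns exec nvmf_tgt_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"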
00:18:34.074 ======================================================== 00:18:34.074 Latency(us) 00:18:34.074 Device Information : IOPS MiB/s Average min max 00:18:34.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11895.04 46.46 5381.20 2178.08 9380.85 00:18:34.074 ======================================================== 00:18:34.074 Total : 11895.04 46.46 5381.20 2178.08 9380.85 00:18:34.074 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNmruzjlGL 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wNmruzjlGL' 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84256 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84256 /var/tmp/bdevperf.sock 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84256 ']' 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.074 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.074 [2024-07-15 18:44:06.552030] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
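run_bdevperf, whose launch is traced here, follows the usual pattern for TLS initiator tests: start bdevperf in -z (wait) mode on its own RPC socket, attach a TLS-protected NVMe bdev through that socket, then drive I/O with the perform_tests helper. A sketch, with a polling loop standing in for the script's waitforlisten helper:

sock=/var/tmp/bdevperf.sock
key=/tmp/tmp.wNmruzjlGL
./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# wait until bdevperf answers on its RPC socket
until ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do sleep 0.5; done

./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests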
00:18:34.074 [2024-07-15 18:44:06.552933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84256 ] 00:18:34.074 [2024-07-15 18:44:06.689695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.074 [2024-07-15 18:44:06.824717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.074 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.074 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.074 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNmruzjlGL 00:18:34.074 [2024-07-15 18:44:08.096749] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.074 [2024-07-15 18:44:08.096869] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.074 TLSTESTn1 00:18:34.074 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:34.074 Running I/O for 10 seconds... 00:18:44.117 00:18:44.117 Latency(us) 00:18:44.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.117 Verification LBA range: start 0x0 length 0x2000 00:18:44.117 TLSTESTn1 : 10.01 4623.41 18.06 0.00 0.00 27638.42 5742.20 33953.89 00:18:44.117 =================================================================================================================== 00:18:44.117 Total : 4623.41 18.06 0.00 0.00 27638.42 5742.20 33953.89 00:18:44.117 0 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84256 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84256 ']' 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84256 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84256 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:44.117 killing process with pid 84256 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84256' 00:18:44.117 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.117 00:18:44.117 Latency(us) 00:18:44.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.117 =================================================================================================================== 00:18:44.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:18:44.117 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84256 00:18:44.118 [2024-07-15 18:44:18.444273] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.118 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84256 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXoClgbFDh 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXoClgbFDh 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXoClgbFDh 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FXoClgbFDh' 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84408 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84408 /var/tmp/bdevperf.sock 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84408 ']' 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.375 18:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.375 [2024-07-15 18:44:18.698004] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:44.375 [2024-07-15 18:44:18.698094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84408 ] 00:18:44.375 [2024-07-15 18:44:18.830420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.633 [2024-07-15 18:44:18.934815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FXoClgbFDh 00:18:45.566 [2024-07-15 18:44:19.906966] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.566 [2024-07-15 18:44:19.907092] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:45.566 [2024-07-15 18:44:19.915272] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.566 [2024-07-15 18:44:19.915505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5ca0 (107): Transport endpoint is not connected 00:18:45.566 [2024-07-15 18:44:19.916493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5ca0 (9): Bad file descriptor 00:18:45.566 [2024-07-15 18:44:19.917490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:45.566 [2024-07-15 18:44:19.917514] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.566 [2024-07-15 18:44:19.917538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
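The block above is the first negative case in target/tls.sh (target/tls.sh@146): NOT run_bdevperf hands bdevperf a PSK file, /tmp/tmp.FXoClgbFDh, that the target was never configured with, so the connection is torn down (errno 107) and bdev_nvme_attach_controller is expected to fail. The NOT wrapper from autotest_common.sh simply inverts the exit status so that the expected failure counts as a pass. A minimal sketch of that idea, for illustration only; the real helper in test/common/autotest_common.sh is not reproduced here and also handles xtrace state:

    NOT() {
        # run the wrapped command; report success only if it failed
        if "$@"; then
            return 1    # unexpected success, the surrounding test should fail
        fi
        return 0        # expected failure, the surrounding test passes
    }

    # usage as in the trace above
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXoClgbFDh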
00:18:45.566 2024/07/15 18:44:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.FXoClgbFDh subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:45.566 request: 00:18:45.566 { 00:18:45.566 "method": "bdev_nvme_attach_controller", 00:18:45.566 "params": { 00:18:45.566 "name": "TLSTEST", 00:18:45.566 "trtype": "tcp", 00:18:45.566 "traddr": "10.0.0.2", 00:18:45.566 "adrfam": "ipv4", 00:18:45.566 "trsvcid": "4420", 00:18:45.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.566 "prchk_reftag": false, 00:18:45.566 "prchk_guard": false, 00:18:45.566 "hdgst": false, 00:18:45.566 "ddgst": false, 00:18:45.566 "psk": "/tmp/tmp.FXoClgbFDh" 00:18:45.566 } 00:18:45.566 } 00:18:45.566 Got JSON-RPC error response 00:18:45.566 GoRPCClient: error on JSON-RPC call 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84408 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84408 ']' 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84408 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84408 00:18:45.566 killing process with pid 84408 00:18:45.566 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.566 00:18:45.566 Latency(us) 00:18:45.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.566 =================================================================================================================== 00:18:45.566 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84408' 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84408 00:18:45.566 [2024-07-15 18:44:19.970219] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:45.566 18:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84408 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wNmruzjlGL 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wNmruzjlGL 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wNmruzjlGL 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wNmruzjlGL' 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84453 00:18:45.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84453 /var/tmp/bdevperf.sock 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84453 ']' 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.825 18:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.825 [2024-07-15 18:44:20.209593] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:45.825 [2024-07-15 18:44:20.209687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84453 ] 00:18:46.083 [2024-07-15 18:44:20.346511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.083 [2024-07-15 18:44:20.450514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.wNmruzjlGL 00:18:47.017 [2024-07-15 18:44:21.426682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.017 [2024-07-15 18:44:21.426805] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:47.017 [2024-07-15 18:44:21.435567] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:47.017 [2024-07-15 18:44:21.435614] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:47.017 [2024-07-15 18:44:21.435673] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:47.017 [2024-07-15 18:44:21.436233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86eca0 (107): Transport endpoint is not connected 00:18:47.017 [2024-07-15 18:44:21.437221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86eca0 (9): Bad file descriptor 00:18:47.017 [2024-07-15 18:44:21.438219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:47.017 [2024-07-15 18:44:21.438242] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:47.017 [2024-07-15 18:44:21.438257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
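The second negative case (target/tls.sh@149) flips it around: the key, /tmp/tmp.wNmruzjlGL, is the one the target knows, but the connection comes in as nqn.2016-06.io.spdk:host2, which was never registered, so the target cannot find a PSK for that identity and drops the connection before the controller initializes. For host2 to be allowed in, it would need its own nvmf_subsystem_add_host registration; a sketch of what that would look like, reusing the same key file purely for illustration, followed by the query that shows which hosts a subsystem currently accepts:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.wNmruzjlGL
    # dump subsystems, their allowed hosts and listeners, to confirm the registration
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems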
00:18:47.017 2024/07/15 18:44:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.wNmruzjlGL subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:47.017 request: 00:18:47.017 { 00:18:47.017 "method": "bdev_nvme_attach_controller", 00:18:47.017 "params": { 00:18:47.017 "name": "TLSTEST", 00:18:47.017 "trtype": "tcp", 00:18:47.017 "traddr": "10.0.0.2", 00:18:47.017 "adrfam": "ipv4", 00:18:47.017 "trsvcid": "4420", 00:18:47.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.017 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:47.017 "prchk_reftag": false, 00:18:47.017 "prchk_guard": false, 00:18:47.017 "hdgst": false, 00:18:47.017 "ddgst": false, 00:18:47.017 "psk": "/tmp/tmp.wNmruzjlGL" 00:18:47.017 } 00:18:47.017 } 00:18:47.017 Got JSON-RPC error response 00:18:47.017 GoRPCClient: error on JSON-RPC call 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84453 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84453 ']' 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84453 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84453 00:18:47.017 killing process with pid 84453 00:18:47.017 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.017 00:18:47.017 Latency(us) 00:18:47.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.017 =================================================================================================================== 00:18:47.017 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84453' 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84453 00:18:47.017 [2024-07-15 18:44:21.490854] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:47.017 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84453 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNmruzjlGL 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNmruzjlGL 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:47.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wNmruzjlGL 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wNmruzjlGL' 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84499 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84499 /var/tmp/bdevperf.sock 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84499 ']' 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.276 18:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.276 [2024-07-15 18:44:21.735058] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:47.276 [2024-07-15 18:44:21.735565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84499 ] 00:18:47.534 [2024-07-15 18:44:21.873195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.534 [2024-07-15 18:44:21.978115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.469 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.469 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:48.469 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNmruzjlGL 00:18:48.469 [2024-07-15 18:44:22.938958] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.469 [2024-07-15 18:44:22.939074] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:48.469 [2024-07-15 18:44:22.945796] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:48.469 [2024-07-15 18:44:22.945844] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:48.469 [2024-07-15 18:44:22.945902] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:48.469 [2024-07-15 18:44:22.946476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100aca0 (107): Transport endpoint is not connected 00:18:48.469 [2024-07-15 18:44:22.947463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100aca0 (9): Bad file descriptor 00:18:48.469 [2024-07-15 18:44:22.948461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:48.469 [2024-07-15 18:44:22.948483] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:48.469 [2024-07-15 18:44:22.948497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
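The third negative case (target/tls.sh@152) keeps the good key and the registered host but points the connection at nqn.2016-06.io.spdk:cnode2, a subsystem that was never created, so again no PSK identity matches and the attach fails the same way. The identity the target searches for is the literal string printed in the errors above, a fixed prefix followed by the host and subsystem NQNs; a throwaway reconstruction of it (what the leading NVMe0R01 token encodes, presumably the PSK type and hash, is an assumption and not taken from this log):

    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # prints: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2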
00:18:48.469 2024/07/15 18:44:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.wNmruzjlGL subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:48.728 request: 00:18:48.728 { 00:18:48.728 "method": "bdev_nvme_attach_controller", 00:18:48.728 "params": { 00:18:48.728 "name": "TLSTEST", 00:18:48.728 "trtype": "tcp", 00:18:48.728 "traddr": "10.0.0.2", 00:18:48.728 "adrfam": "ipv4", 00:18:48.728 "trsvcid": "4420", 00:18:48.728 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:48.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.728 "prchk_reftag": false, 00:18:48.728 "prchk_guard": false, 00:18:48.728 "hdgst": false, 00:18:48.728 "ddgst": false, 00:18:48.728 "psk": "/tmp/tmp.wNmruzjlGL" 00:18:48.728 } 00:18:48.728 } 00:18:48.728 Got JSON-RPC error response 00:18:48.728 GoRPCClient: error on JSON-RPC call 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84499 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84499 ']' 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84499 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.728 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84499 00:18:48.728 killing process with pid 84499 00:18:48.728 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.728 00:18:48.728 Latency(us) 00:18:48.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.728 =================================================================================================================== 00:18:48.728 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84499' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84499 00:18:48.728 [2024-07-15 18:44:23.008547] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84499 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84539 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84539 /var/tmp/bdevperf.sock 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84539 ']' 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.728 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.991 [2024-07-15 18:44:23.265762] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:48.991 [2024-07-15 18:44:23.265880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84539 ] 00:18:48.991 [2024-07-15 18:44:23.410594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.253 [2024-07-15 18:44:23.514203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.820 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.820 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:49.820 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.078 [2024-07-15 18:44:24.426086] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:50.078 [2024-07-15 18:44:24.427918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154b240 (9): Bad file descriptor 00:18:50.078 [2024-07-15 18:44:24.428912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:50.078 [2024-07-15 18:44:24.428936] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:50.078 [2024-07-15 18:44:24.428956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.078 2024/07/15 18:44:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:50.078 request: 00:18:50.078 { 00:18:50.078 "method": "bdev_nvme_attach_controller", 00:18:50.078 "params": { 00:18:50.078 "name": "TLSTEST", 00:18:50.078 "trtype": "tcp", 00:18:50.078 "traddr": "10.0.0.2", 00:18:50.078 "adrfam": "ipv4", 00:18:50.078 "trsvcid": "4420", 00:18:50.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.078 "prchk_reftag": false, 00:18:50.078 "prchk_guard": false, 00:18:50.078 "hdgst": false, 00:18:50.078 "ddgst": false 00:18:50.078 } 00:18:50.078 } 00:18:50.078 Got JSON-RPC error response 00:18:50.079 GoRPCClient: error on JSON-RPC call 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84539 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84539 ']' 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84539 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84539 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.079 killing process with pid 84539 00:18:50.079 18:44:24 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84539' 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84539 00:18:50.079 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.079 00:18:50.079 Latency(us) 00:18:50.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.079 =================================================================================================================== 00:18:50.079 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.079 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84539 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83898 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83898 ']' 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83898 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83898 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:50.337 killing process with pid 83898 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83898' 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83898 00:18:50.337 [2024-07-15 18:44:24.697761] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:50.337 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83898 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # 
key_long_path=/tmp/tmp.ub1ZwOYebO 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ub1ZwOYebO 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84600 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84600 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84600 ']' 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.595 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.595 [2024-07-15 18:44:25.041931] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:50.595 [2024-07-15 18:44:25.042062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.864 [2024-07-15 18:44:25.180719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.864 [2024-07-15 18:44:25.291563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.864 [2024-07-15 18:44:25.291617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.864 [2024-07-15 18:44:25.291627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.864 [2024-07-15 18:44:25.291636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.864 [2024-07-15 18:44:25.291644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
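Before the next round, target/tls.sh@159 builds a key in the TLS PSK interchange format: format_interchange_psk is fed the raw secret plus a hash selector of 2, and the resulting NVMeTLSkey-1:02:... string is written to /tmp/tmp.ub1ZwOYebO with mode 0600. A rough reconstruction of that string, under the assumption that the interchange format is the fixed prefix, then base64 of the secret with a little-endian CRC-32 of the secret appended, then a trailing colon, and that 02 selects SHA-384; none of that is stated in this log, but if the assumption holds the snippet reproduces the value above (the gzip trailer is used only as a convenient source of a little-endian CRC-32):

    secret='00112233445566778899aabbccddeeff0011223344556677'
    b64=$( { printf '%s' "$secret"
             printf '%s' "$secret" | gzip -c | tail -c 8 | head -c 4    # CRC-32 of the secret, little-endian
           } | base64 -w0 )
    printf 'NVMeTLSkey-1:02:%s:\n' "$b64"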
00:18:50.864 [2024-07-15 18:44:25.291677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ub1ZwOYebO 00:18:51.799 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.799 [2024-07-15 18:44:26.203664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.799 18:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:52.057 18:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:52.315 [2024-07-15 18:44:26.703755] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.316 [2024-07-15 18:44:26.703982] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.316 18:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.574 malloc0 00:18:52.574 18:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.832 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:18:53.090 [2024-07-15 18:44:27.337367] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ub1ZwOYebO 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ub1ZwOYebO' 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84697 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.090 18:44:27 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84697 /var/tmp/bdevperf.sock 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84697 ']' 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.090 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.090 [2024-07-15 18:44:27.407661] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:53.090 [2024-07-15 18:44:27.407744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84697 ] 00:18:53.090 [2024-07-15 18:44:27.543515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.348 [2024-07-15 18:44:27.648834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.912 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.912 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.912 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:18:54.169 [2024-07-15 18:44:28.545810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.169 [2024-07-15 18:44:28.545952] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:54.169 TLSTESTn1 00:18:54.169 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:54.427 Running I/O for 10 seconds... 
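With bdevperf (pid 84697) now pushing I/O at TLSTESTn1 for ten seconds, this is the passing path, and it is easy to lose in the interleaved trace. Collected in one place, the sequence this run used is roughly the following, with every command, path and NQN as they appear in the trace above; the target-side calls go to the default /var/tmp/spdk.sock, the initiator-side ones to the bdevperf RPC socket, and bdevperf is backgrounded here for readability where the suite itself backgrounds it and polls the socket with waitforlisten:

    # target side: TCP transport, subsystem, TLS-enabled listener (-k), namespace, allowed host with its PSK
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO

    # initiator side: bdevperf waiting on its RPC socket, TLS attach with the same key, then run the workload
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests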
00:19:04.395 00:19:04.395 Latency(us) 00:19:04.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.395 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:04.395 Verification LBA range: start 0x0 length 0x2000 00:19:04.395 TLSTESTn1 : 10.01 4687.16 18.31 0.00 0.00 27262.47 5149.26 21970.16 00:19:04.395 =================================================================================================================== 00:19:04.395 Total : 4687.16 18.31 0.00 0.00 27262.47 5149.26 21970.16 00:19:04.395 0 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84697 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84697 ']' 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84697 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84697 00:19:04.395 killing process with pid 84697 00:19:04.395 Received shutdown signal, test time was about 10.000000 seconds 00:19:04.395 00:19:04.395 Latency(us) 00:19:04.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.395 =================================================================================================================== 00:19:04.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84697' 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84697 00:19:04.395 [2024-07-15 18:44:38.795906] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:04.395 18:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84697 00:19:04.652 18:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ub1ZwOYebO 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ub1ZwOYebO 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ub1ZwOYebO 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ub1ZwOYebO 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.653 
18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ub1ZwOYebO' 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84851 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84851 /var/tmp/bdevperf.sock 00:19:04.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84851 ']' 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.653 18:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.653 [2024-07-15 18:44:39.064416] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:04.653 [2024-07-15 18:44:39.064535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84851 ] 00:19:04.910 [2024-07-15 18:44:39.206198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.910 [2024-07-15 18:44:39.313621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.839 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.839 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:05.840 18:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:19:05.840 [2024-07-15 18:44:40.291954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.840 [2024-07-15 18:44:40.292040] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:05.840 [2024-07-15 18:44:40.292051] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ub1ZwOYebO 00:19:05.840 2024/07/15 18:44:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.ub1ZwOYebO subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:19:05.840 request: 00:19:05.840 { 00:19:05.840 "method": "bdev_nvme_attach_controller", 00:19:05.840 "params": { 00:19:05.840 "name": "TLSTEST", 00:19:05.840 "trtype": "tcp", 00:19:05.840 "traddr": "10.0.0.2", 00:19:05.840 "adrfam": "ipv4", 00:19:05.840 "trsvcid": "4420", 00:19:05.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.840 "prchk_reftag": false, 00:19:05.840 "prchk_guard": false, 00:19:05.840 "hdgst": false, 00:19:05.840 "ddgst": false, 00:19:05.840 "psk": "/tmp/tmp.ub1ZwOYebO" 00:19:05.840 } 00:19:05.840 } 00:19:05.840 Got JSON-RPC error response 00:19:05.840 GoRPCClient: error on JSON-RPC call 00:19:05.840 18:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84851 00:19:05.840 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84851 ']' 00:19:05.840 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84851 00:19:05.840 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84851 00:19:06.131 killing process with pid 84851 00:19:06.131 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.131 00:19:06.131 Latency(us) 00:19:06.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.131 =================================================================================================================== 00:19:06.131 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84851' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84851 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84851 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84600 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84600 ']' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84600 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84600 00:19:06.131 killing process with pid 84600 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84600' 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84600 00:19:06.131 [2024-07-15 18:44:40.558881] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:06.131 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84600 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84903 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84903 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84903 ']' 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.389 18:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.389 [2024-07-15 18:44:40.831246] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:06.389 [2024-07-15 18:44:40.831339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.647 [2024-07-15 18:44:40.966237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.647 [2024-07-15 18:44:41.078074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.647 [2024-07-15 18:44:41.078131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.647 [2024-07-15 18:44:41.078143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.647 [2024-07-15 18:44:41.078153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.647 [2024-07-15 18:44:41.078161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
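The failure from pid 84851 above and the nvmf_subsystem_add_host failure that follows below share one cause: target/tls.sh@170 deliberately loosened the key file to 0666, and both the initiator (bdev_nvme_load_psk) and the target (tcp_load_psk) refuse a PSK file that group or other can read, reporting 'Incorrect permissions for PSK file'. The suite restores the strict mode further down (target/tls.sh@181) before the key is used again; the requirement is easy to check by hand:

    chmod 0600 /tmp/tmp.ub1ZwOYebO
    stat -c '%a %n' /tmp/tmp.ub1ZwOYebO    # expect: 600 /tmp/tmp.ub1ZwOYebO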
00:19:06.647 [2024-07-15 18:44:41.078190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ub1ZwOYebO 00:19:07.583 18:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.840 [2024-07-15 18:44:42.127877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.840 18:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.098 18:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:08.355 [2024-07-15 18:44:42.651992] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.355 [2024-07-15 18:44:42.652219] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.355 18:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.613 malloc0 00:19:08.613 18:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:08.872 18:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:19:09.130 [2024-07-15 18:44:43.481785] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:09.130 [2024-07-15 18:44:43.481836] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:09.130 [2024-07-15 18:44:43.481871] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:09.130 2024/07/15 18:44:43 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.ub1ZwOYebO], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:19:09.130 request: 00:19:09.130 { 00:19:09.130 "method": "nvmf_subsystem_add_host", 00:19:09.130 "params": { 00:19:09.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.130 "host": "nqn.2016-06.io.spdk:host1", 00:19:09.130 "psk": "/tmp/tmp.ub1ZwOYebO" 00:19:09.130 } 00:19:09.130 } 00:19:09.130 Got JSON-RPC error response 00:19:09.130 GoRPCClient: error on JSON-RPC call 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84903 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84903 ']' 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84903 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84903 00:19:09.130 killing process with pid 84903 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84903' 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84903 00:19:09.130 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84903 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ub1ZwOYebO 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85013 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85013 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85013 ']' 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
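The Code=-32603 failure just above is the expected negative case: tcp_load_psk reports "Incorrect permissions for PSK file", so nvmf_subsystem_add_host is rejected while the temporary key file is still too permissive. The recovery is to tighten the file mode (target/tls.sh@181) and re-run the setup against a fresh target, at which point the add_host step that failed here succeeds. A minimal sketch of the fix and the retried call, reusing this run's temporary key path:

    chmod 0600 /tmp/tmp.ub1ZwOYebO    # the target rejects PSK files with a more permissive mode
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO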
00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.387 18:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.387 [2024-07-15 18:44:43.819715] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:09.387 [2024-07-15 18:44:43.819844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.644 [2024-07-15 18:44:43.966398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.644 [2024-07-15 18:44:44.072871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.644 [2024-07-15 18:44:44.072931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.644 [2024-07-15 18:44:44.072943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.644 [2024-07-15 18:44:44.072962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.645 [2024-07-15 18:44:44.072970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.645 [2024-07-15 18:44:44.073002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ub1ZwOYebO 00:19:10.579 18:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:10.836 [2024-07-15 18:44:45.066409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.836 18:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.094 18:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.352 [2024-07-15 18:44:45.614540] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.352 [2024-07-15 18:44:45.614925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.352 18:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.609 malloc0 00:19:11.609 18:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:11.609 18:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:19:11.867 [2024-07-15 18:44:46.332925] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:12.124 18:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85116 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85116 /var/tmp/bdevperf.sock 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85116 ']' 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.125 18:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.125 [2024-07-15 18:44:46.416886] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:12.125 [2024-07-15 18:44:46.417036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85116 ] 00:19:12.125 [2024-07-15 18:44:46.563759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.382 [2024-07-15 18:44:46.684330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.948 18:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.948 18:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:12.948 18:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:19:13.205 [2024-07-15 18:44:47.625035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.205 [2024-07-15 18:44:47.625158] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:13.461 TLSTESTn1 00:19:13.461 18:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:13.718 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:13.718 "subsystems": [ 00:19:13.718 { 00:19:13.718 "subsystem": "keyring", 00:19:13.718 "config": [] 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "subsystem": "iobuf", 00:19:13.718 "config": [ 00:19:13.718 { 00:19:13.718 "method": "iobuf_set_options", 00:19:13.718 "params": { 00:19:13.718 "large_bufsize": 
135168, 00:19:13.718 "large_pool_count": 1024, 00:19:13.718 "small_bufsize": 8192, 00:19:13.718 "small_pool_count": 8192 00:19:13.718 } 00:19:13.718 } 00:19:13.718 ] 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "subsystem": "sock", 00:19:13.718 "config": [ 00:19:13.718 { 00:19:13.718 "method": "sock_set_default_impl", 00:19:13.718 "params": { 00:19:13.718 "impl_name": "posix" 00:19:13.718 } 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "method": "sock_impl_set_options", 00:19:13.718 "params": { 00:19:13.718 "enable_ktls": false, 00:19:13.718 "enable_placement_id": 0, 00:19:13.718 "enable_quickack": false, 00:19:13.718 "enable_recv_pipe": true, 00:19:13.718 "enable_zerocopy_send_client": false, 00:19:13.718 "enable_zerocopy_send_server": true, 00:19:13.718 "impl_name": "ssl", 00:19:13.718 "recv_buf_size": 4096, 00:19:13.718 "send_buf_size": 4096, 00:19:13.718 "tls_version": 0, 00:19:13.718 "zerocopy_threshold": 0 00:19:13.718 } 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "method": "sock_impl_set_options", 00:19:13.718 "params": { 00:19:13.718 "enable_ktls": false, 00:19:13.718 "enable_placement_id": 0, 00:19:13.718 "enable_quickack": false, 00:19:13.718 "enable_recv_pipe": true, 00:19:13.718 "enable_zerocopy_send_client": false, 00:19:13.718 "enable_zerocopy_send_server": true, 00:19:13.718 "impl_name": "posix", 00:19:13.718 "recv_buf_size": 2097152, 00:19:13.718 "send_buf_size": 2097152, 00:19:13.718 "tls_version": 0, 00:19:13.718 "zerocopy_threshold": 0 00:19:13.718 } 00:19:13.718 } 00:19:13.718 ] 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "subsystem": "vmd", 00:19:13.718 "config": [] 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "subsystem": "accel", 00:19:13.718 "config": [ 00:19:13.718 { 00:19:13.718 "method": "accel_set_options", 00:19:13.718 "params": { 00:19:13.718 "buf_count": 2048, 00:19:13.718 "large_cache_size": 16, 00:19:13.718 "sequence_count": 2048, 00:19:13.718 "small_cache_size": 128, 00:19:13.718 "task_count": 2048 00:19:13.718 } 00:19:13.718 } 00:19:13.718 ] 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "subsystem": "bdev", 00:19:13.718 "config": [ 00:19:13.718 { 00:19:13.718 "method": "bdev_set_options", 00:19:13.718 "params": { 00:19:13.718 "bdev_auto_examine": true, 00:19:13.718 "bdev_io_cache_size": 256, 00:19:13.718 "bdev_io_pool_size": 65535, 00:19:13.718 "iobuf_large_cache_size": 16, 00:19:13.718 "iobuf_small_cache_size": 128 00:19:13.718 } 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "method": "bdev_raid_set_options", 00:19:13.718 "params": { 00:19:13.718 "process_window_size_kb": 1024 00:19:13.718 } 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "method": "bdev_iscsi_set_options", 00:19:13.718 "params": { 00:19:13.718 "timeout_sec": 30 00:19:13.718 } 00:19:13.718 }, 00:19:13.718 { 00:19:13.718 "method": "bdev_nvme_set_options", 00:19:13.718 "params": { 00:19:13.718 "action_on_timeout": "none", 00:19:13.718 "allow_accel_sequence": false, 00:19:13.718 "arbitration_burst": 0, 00:19:13.718 "bdev_retry_count": 3, 00:19:13.718 "ctrlr_loss_timeout_sec": 0, 00:19:13.718 "delay_cmd_submit": true, 00:19:13.718 "dhchap_dhgroups": [ 00:19:13.718 "null", 00:19:13.718 "ffdhe2048", 00:19:13.718 "ffdhe3072", 00:19:13.718 "ffdhe4096", 00:19:13.718 "ffdhe6144", 00:19:13.718 "ffdhe8192" 00:19:13.718 ], 00:19:13.718 "dhchap_digests": [ 00:19:13.718 "sha256", 00:19:13.718 "sha384", 00:19:13.718 "sha512" 00:19:13.718 ], 00:19:13.718 "disable_auto_failback": false, 00:19:13.718 "fast_io_fail_timeout_sec": 0, 00:19:13.718 "generate_uuids": false, 00:19:13.718 "high_priority_weight": 0, 
00:19:13.718 "io_path_stat": false, 00:19:13.718 "io_queue_requests": 0, 00:19:13.719 "keep_alive_timeout_ms": 10000, 00:19:13.719 "low_priority_weight": 0, 00:19:13.719 "medium_priority_weight": 0, 00:19:13.719 "nvme_adminq_poll_period_us": 10000, 00:19:13.719 "nvme_error_stat": false, 00:19:13.719 "nvme_ioq_poll_period_us": 0, 00:19:13.719 "rdma_cm_event_timeout_ms": 0, 00:19:13.719 "rdma_max_cq_size": 0, 00:19:13.719 "rdma_srq_size": 0, 00:19:13.719 "reconnect_delay_sec": 0, 00:19:13.719 "timeout_admin_us": 0, 00:19:13.719 "timeout_us": 0, 00:19:13.719 "transport_ack_timeout": 0, 00:19:13.719 "transport_retry_count": 4, 00:19:13.719 "transport_tos": 0 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "bdev_nvme_set_hotplug", 00:19:13.719 "params": { 00:19:13.719 "enable": false, 00:19:13.719 "period_us": 100000 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "bdev_malloc_create", 00:19:13.719 "params": { 00:19:13.719 "block_size": 4096, 00:19:13.719 "name": "malloc0", 00:19:13.719 "num_blocks": 8192, 00:19:13.719 "optimal_io_boundary": 0, 00:19:13.719 "physical_block_size": 4096, 00:19:13.719 "uuid": "774ceefc-609e-4b33-91c2-18da9b342000" 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "bdev_wait_for_examine" 00:19:13.719 } 00:19:13.719 ] 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "subsystem": "nbd", 00:19:13.719 "config": [] 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "subsystem": "scheduler", 00:19:13.719 "config": [ 00:19:13.719 { 00:19:13.719 "method": "framework_set_scheduler", 00:19:13.719 "params": { 00:19:13.719 "name": "static" 00:19:13.719 } 00:19:13.719 } 00:19:13.719 ] 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "subsystem": "nvmf", 00:19:13.719 "config": [ 00:19:13.719 { 00:19:13.719 "method": "nvmf_set_config", 00:19:13.719 "params": { 00:19:13.719 "admin_cmd_passthru": { 00:19:13.719 "identify_ctrlr": false 00:19:13.719 }, 00:19:13.719 "discovery_filter": "match_any" 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_set_max_subsystems", 00:19:13.719 "params": { 00:19:13.719 "max_subsystems": 1024 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_set_crdt", 00:19:13.719 "params": { 00:19:13.719 "crdt1": 0, 00:19:13.719 "crdt2": 0, 00:19:13.719 "crdt3": 0 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_create_transport", 00:19:13.719 "params": { 00:19:13.719 "abort_timeout_sec": 1, 00:19:13.719 "ack_timeout": 0, 00:19:13.719 "buf_cache_size": 4294967295, 00:19:13.719 "c2h_success": false, 00:19:13.719 "data_wr_pool_size": 0, 00:19:13.719 "dif_insert_or_strip": false, 00:19:13.719 "in_capsule_data_size": 4096, 00:19:13.719 "io_unit_size": 131072, 00:19:13.719 "max_aq_depth": 128, 00:19:13.719 "max_io_qpairs_per_ctrlr": 127, 00:19:13.719 "max_io_size": 131072, 00:19:13.719 "max_queue_depth": 128, 00:19:13.719 "num_shared_buffers": 511, 00:19:13.719 "sock_priority": 0, 00:19:13.719 "trtype": "TCP", 00:19:13.719 "zcopy": false 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_create_subsystem", 00:19:13.719 "params": { 00:19:13.719 "allow_any_host": false, 00:19:13.719 "ana_reporting": false, 00:19:13.719 "max_cntlid": 65519, 00:19:13.719 "max_namespaces": 10, 00:19:13.719 "min_cntlid": 1, 00:19:13.719 "model_number": "SPDK bdev Controller", 00:19:13.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.719 "serial_number": "SPDK00000000000001" 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": 
"nvmf_subsystem_add_host", 00:19:13.719 "params": { 00:19:13.719 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.719 "psk": "/tmp/tmp.ub1ZwOYebO" 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_subsystem_add_ns", 00:19:13.719 "params": { 00:19:13.719 "namespace": { 00:19:13.719 "bdev_name": "malloc0", 00:19:13.719 "nguid": "774CEEFC609E4B3391C218DA9B342000", 00:19:13.719 "no_auto_visible": false, 00:19:13.719 "nsid": 1, 00:19:13.719 "uuid": "774ceefc-609e-4b33-91c2-18da9b342000" 00:19:13.719 }, 00:19:13.719 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:13.719 } 00:19:13.719 }, 00:19:13.719 { 00:19:13.719 "method": "nvmf_subsystem_add_listener", 00:19:13.719 "params": { 00:19:13.719 "listen_address": { 00:19:13.719 "adrfam": "IPv4", 00:19:13.719 "traddr": "10.0.0.2", 00:19:13.719 "trsvcid": "4420", 00:19:13.719 "trtype": "TCP" 00:19:13.719 }, 00:19:13.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.719 "secure_channel": true 00:19:13.719 } 00:19:13.719 } 00:19:13.719 ] 00:19:13.719 } 00:19:13.719 ] 00:19:13.719 }' 00:19:13.719 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:14.285 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:14.285 "subsystems": [ 00:19:14.285 { 00:19:14.285 "subsystem": "keyring", 00:19:14.285 "config": [] 00:19:14.285 }, 00:19:14.285 { 00:19:14.285 "subsystem": "iobuf", 00:19:14.285 "config": [ 00:19:14.285 { 00:19:14.285 "method": "iobuf_set_options", 00:19:14.285 "params": { 00:19:14.285 "large_bufsize": 135168, 00:19:14.285 "large_pool_count": 1024, 00:19:14.285 "small_bufsize": 8192, 00:19:14.285 "small_pool_count": 8192 00:19:14.285 } 00:19:14.285 } 00:19:14.285 ] 00:19:14.285 }, 00:19:14.285 { 00:19:14.285 "subsystem": "sock", 00:19:14.285 "config": [ 00:19:14.285 { 00:19:14.285 "method": "sock_set_default_impl", 00:19:14.285 "params": { 00:19:14.285 "impl_name": "posix" 00:19:14.285 } 00:19:14.285 }, 00:19:14.285 { 00:19:14.286 "method": "sock_impl_set_options", 00:19:14.286 "params": { 00:19:14.286 "enable_ktls": false, 00:19:14.286 "enable_placement_id": 0, 00:19:14.286 "enable_quickack": false, 00:19:14.286 "enable_recv_pipe": true, 00:19:14.286 "enable_zerocopy_send_client": false, 00:19:14.286 "enable_zerocopy_send_server": true, 00:19:14.286 "impl_name": "ssl", 00:19:14.286 "recv_buf_size": 4096, 00:19:14.286 "send_buf_size": 4096, 00:19:14.286 "tls_version": 0, 00:19:14.286 "zerocopy_threshold": 0 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "sock_impl_set_options", 00:19:14.286 "params": { 00:19:14.286 "enable_ktls": false, 00:19:14.286 "enable_placement_id": 0, 00:19:14.286 "enable_quickack": false, 00:19:14.286 "enable_recv_pipe": true, 00:19:14.286 "enable_zerocopy_send_client": false, 00:19:14.286 "enable_zerocopy_send_server": true, 00:19:14.286 "impl_name": "posix", 00:19:14.286 "recv_buf_size": 2097152, 00:19:14.286 "send_buf_size": 2097152, 00:19:14.286 "tls_version": 0, 00:19:14.286 "zerocopy_threshold": 0 00:19:14.286 } 00:19:14.286 } 00:19:14.286 ] 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "subsystem": "vmd", 00:19:14.286 "config": [] 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "subsystem": "accel", 00:19:14.286 "config": [ 00:19:14.286 { 00:19:14.286 "method": "accel_set_options", 00:19:14.286 "params": { 00:19:14.286 "buf_count": 2048, 00:19:14.286 "large_cache_size": 16, 00:19:14.286 "sequence_count": 2048, 00:19:14.286 
"small_cache_size": 128, 00:19:14.286 "task_count": 2048 00:19:14.286 } 00:19:14.286 } 00:19:14.286 ] 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "subsystem": "bdev", 00:19:14.286 "config": [ 00:19:14.286 { 00:19:14.286 "method": "bdev_set_options", 00:19:14.286 "params": { 00:19:14.286 "bdev_auto_examine": true, 00:19:14.286 "bdev_io_cache_size": 256, 00:19:14.286 "bdev_io_pool_size": 65535, 00:19:14.286 "iobuf_large_cache_size": 16, 00:19:14.286 "iobuf_small_cache_size": 128 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_raid_set_options", 00:19:14.286 "params": { 00:19:14.286 "process_window_size_kb": 1024 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_iscsi_set_options", 00:19:14.286 "params": { 00:19:14.286 "timeout_sec": 30 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_nvme_set_options", 00:19:14.286 "params": { 00:19:14.286 "action_on_timeout": "none", 00:19:14.286 "allow_accel_sequence": false, 00:19:14.286 "arbitration_burst": 0, 00:19:14.286 "bdev_retry_count": 3, 00:19:14.286 "ctrlr_loss_timeout_sec": 0, 00:19:14.286 "delay_cmd_submit": true, 00:19:14.286 "dhchap_dhgroups": [ 00:19:14.286 "null", 00:19:14.286 "ffdhe2048", 00:19:14.286 "ffdhe3072", 00:19:14.286 "ffdhe4096", 00:19:14.286 "ffdhe6144", 00:19:14.286 "ffdhe8192" 00:19:14.286 ], 00:19:14.286 "dhchap_digests": [ 00:19:14.286 "sha256", 00:19:14.286 "sha384", 00:19:14.286 "sha512" 00:19:14.286 ], 00:19:14.286 "disable_auto_failback": false, 00:19:14.286 "fast_io_fail_timeout_sec": 0, 00:19:14.286 "generate_uuids": false, 00:19:14.286 "high_priority_weight": 0, 00:19:14.286 "io_path_stat": false, 00:19:14.286 "io_queue_requests": 512, 00:19:14.286 "keep_alive_timeout_ms": 10000, 00:19:14.286 "low_priority_weight": 0, 00:19:14.286 "medium_priority_weight": 0, 00:19:14.286 "nvme_adminq_poll_period_us": 10000, 00:19:14.286 "nvme_error_stat": false, 00:19:14.286 "nvme_ioq_poll_period_us": 0, 00:19:14.286 "rdma_cm_event_timeout_ms": 0, 00:19:14.286 "rdma_max_cq_size": 0, 00:19:14.286 "rdma_srq_size": 0, 00:19:14.286 "reconnect_delay_sec": 0, 00:19:14.286 "timeout_admin_us": 0, 00:19:14.286 "timeout_us": 0, 00:19:14.286 "transport_ack_timeout": 0, 00:19:14.286 "transport_retry_count": 4, 00:19:14.286 "transport_tos": 0 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_nvme_attach_controller", 00:19:14.286 "params": { 00:19:14.286 "adrfam": "IPv4", 00:19:14.286 "ctrlr_loss_timeout_sec": 0, 00:19:14.286 "ddgst": false, 00:19:14.286 "fast_io_fail_timeout_sec": 0, 00:19:14.286 "hdgst": false, 00:19:14.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.286 "name": "TLSTEST", 00:19:14.286 "prchk_guard": false, 00:19:14.286 "prchk_reftag": false, 00:19:14.286 "psk": "/tmp/tmp.ub1ZwOYebO", 00:19:14.286 "reconnect_delay_sec": 0, 00:19:14.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.286 "traddr": "10.0.0.2", 00:19:14.286 "trsvcid": "4420", 00:19:14.286 "trtype": "TCP" 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_nvme_set_hotplug", 00:19:14.286 "params": { 00:19:14.286 "enable": false, 00:19:14.286 "period_us": 100000 00:19:14.286 } 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "method": "bdev_wait_for_examine" 00:19:14.286 } 00:19:14.286 ] 00:19:14.286 }, 00:19:14.286 { 00:19:14.286 "subsystem": "nbd", 00:19:14.286 "config": [] 00:19:14.286 } 00:19:14.286 ] 00:19:14.286 }' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85116 00:19:14.286 18:44:48 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85116 ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85116 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85116 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.286 killing process with pid 85116 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85116' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85116 00:19:14.286 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.286 00:19:14.286 Latency(us) 00:19:14.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.286 =================================================================================================================== 00:19:14.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.286 [2024-07-15 18:44:48.488531] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85116 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 85013 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85013 ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85013 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85013 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:14.286 killing process with pid 85013 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85013' 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85013 00:19:14.286 [2024-07-15 18:44:48.725020] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:14.286 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85013 00:19:14.544 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:14.544 18:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:14.544 "subsystems": [ 00:19:14.544 { 00:19:14.544 "subsystem": "keyring", 00:19:14.544 "config": [] 00:19:14.544 }, 00:19:14.544 { 00:19:14.544 "subsystem": "iobuf", 00:19:14.544 "config": [ 00:19:14.544 { 00:19:14.544 "method": "iobuf_set_options", 00:19:14.544 "params": { 00:19:14.544 "large_bufsize": 135168, 00:19:14.544 "large_pool_count": 1024, 00:19:14.544 "small_bufsize": 8192, 00:19:14.544 "small_pool_count": 8192 00:19:14.544 } 00:19:14.544 } 00:19:14.544 ] 00:19:14.544 }, 00:19:14.544 { 
00:19:14.544 "subsystem": "sock", 00:19:14.544 "config": [ 00:19:14.544 { 00:19:14.544 "method": "sock_set_default_impl", 00:19:14.544 "params": { 00:19:14.544 "impl_name": "posix" 00:19:14.544 } 00:19:14.544 }, 00:19:14.544 { 00:19:14.544 "method": "sock_impl_set_options", 00:19:14.545 "params": { 00:19:14.545 "enable_ktls": false, 00:19:14.545 "enable_placement_id": 0, 00:19:14.545 "enable_quickack": false, 00:19:14.545 "enable_recv_pipe": true, 00:19:14.545 "enable_zerocopy_send_client": false, 00:19:14.545 "enable_zerocopy_send_server": true, 00:19:14.545 "impl_name": "ssl", 00:19:14.545 "recv_buf_size": 4096, 00:19:14.545 "send_buf_size": 4096, 00:19:14.545 "tls_version": 0, 00:19:14.545 "zerocopy_threshold": 0 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "sock_impl_set_options", 00:19:14.545 "params": { 00:19:14.545 "enable_ktls": false, 00:19:14.545 "enable_placement_id": 0, 00:19:14.545 "enable_quickack": false, 00:19:14.545 "enable_recv_pipe": true, 00:19:14.545 "enable_zerocopy_send_client": false, 00:19:14.545 "enable_zerocopy_send_server": true, 00:19:14.545 "impl_name": "posix", 00:19:14.545 "recv_buf_size": 2097152, 00:19:14.545 "send_buf_size": 2097152, 00:19:14.545 "tls_version": 0, 00:19:14.545 "zerocopy_threshold": 0 00:19:14.545 } 00:19:14.545 } 00:19:14.545 ] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "vmd", 00:19:14.545 "config": [] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "accel", 00:19:14.545 "config": [ 00:19:14.545 { 00:19:14.545 "method": "accel_set_options", 00:19:14.545 "params": { 00:19:14.545 "buf_count": 2048, 00:19:14.545 "large_cache_size": 16, 00:19:14.545 "sequence_count": 2048, 00:19:14.545 "small_cache_size": 128, 00:19:14.545 "task_count": 2048 00:19:14.545 } 00:19:14.545 } 00:19:14.545 ] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "bdev", 00:19:14.545 "config": [ 00:19:14.545 { 00:19:14.545 "method": "bdev_set_options", 00:19:14.545 "params": { 00:19:14.545 "bdev_auto_examine": true, 00:19:14.545 "bdev_io_cache_size": 256, 00:19:14.545 "bdev_io_pool_size": 65535, 00:19:14.545 "iobuf_large_cache_size": 16, 00:19:14.545 "iobuf_small_cache_size": 128 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_raid_set_options", 00:19:14.545 "params": { 00:19:14.545 "process_window_size_kb": 1024 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_iscsi_set_options", 00:19:14.545 "params": { 00:19:14.545 "timeout_sec": 30 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_nvme_set_options", 00:19:14.545 "params": { 00:19:14.545 "action_on_timeout": "none", 00:19:14.545 "allow_accel_sequence": false, 00:19:14.545 "arbitration_burst": 0, 00:19:14.545 "bdev_retry_count": 3, 00:19:14.545 "ctrlr_loss_timeout_sec": 0, 00:19:14.545 "delay_cmd_submit": true, 00:19:14.545 "dhchap_dhgroups": [ 00:19:14.545 "null", 00:19:14.545 "ffdhe2048", 00:19:14.545 "ffdhe3072", 00:19:14.545 "ffdhe4096", 00:19:14.545 "ffdhe6144", 00:19:14.545 "ffdhe8192" 00:19:14.545 ], 00:19:14.545 "dhchap_digests": [ 00:19:14.545 "sha256", 00:19:14.545 "sha384", 00:19:14.545 "sha512" 00:19:14.545 ], 00:19:14.545 "disable_auto_failback": false, 00:19:14.545 "fast_io_fail_timeout_sec": 0, 00:19:14.545 "generate_uuids": false, 00:19:14.545 "high_priority_weight": 0, 00:19:14.545 "io_path_stat": false, 00:19:14.545 "io_queue_requests": 0, 00:19:14.545 "keep_alive_timeout_ms": 10000, 00:19:14.545 "low_priority_weight": 0, 00:19:14.545 "medium_priority_weight": 0, 
00:19:14.545 "nvme_adminq_poll_period_us": 10000, 00:19:14.545 "nvme_error_stat": false, 00:19:14.545 "nvme_ioq_poll_period_us": 0, 00:19:14.545 "rdma_cm_event_timeout_ms": 0, 00:19:14.545 "rdma_max_cq_size": 0, 00:19:14.545 "rdma_srq_size": 0, 00:19:14.545 "reconnect_delay_sec": 0, 00:19:14.545 "timeout_admin_us": 0, 00:19:14.545 "timeout_us": 0, 00:19:14.545 "transport_ack_timeout": 0, 00:19:14.545 "transport_retry_count": 4, 00:19:14.545 "transport_tos": 0 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_nvme_set_hotplug", 00:19:14.545 "params": { 00:19:14.545 "enable": false, 00:19:14.545 "period_us": 100000 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_malloc_create", 00:19:14.545 "params": { 00:19:14.545 "block_size": 4096, 00:19:14.545 "name": "malloc0", 00:19:14.545 "num_blocks": 8192, 00:19:14.545 "optimal_io_boundary": 0, 00:19:14.545 "physical_block_size": 4096, 00:19:14.545 "uuid": "774ceefc-609e-4b33-91c2-18da9b342000" 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "bdev_wait_for_examine" 00:19:14.545 } 00:19:14.545 ] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "nbd", 00:19:14.545 "config": [] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "scheduler", 00:19:14.545 "config": [ 00:19:14.545 { 00:19:14.545 "method": "framework_set_scheduler", 00:19:14.545 "params": { 00:19:14.545 "name": "static" 00:19:14.545 } 00:19:14.545 } 00:19:14.545 ] 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "subsystem": "nvmf", 00:19:14.545 "config": [ 00:19:14.545 { 00:19:14.545 "method": "nvmf_set_config", 00:19:14.545 "params": { 00:19:14.545 "admin_cmd_passthru": { 00:19:14.545 "identify_ctrlr": false 00:19:14.545 }, 00:19:14.545 "discovery_filter": "match_any" 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_set_max_subsystems", 00:19:14.545 "params": { 00:19:14.545 "max_subsystems": 1024 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_set_crdt", 00:19:14.545 "params": { 00:19:14.545 "crdt1": 0, 00:19:14.545 "crdt2": 0, 00:19:14.545 "crdt3": 0 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_create_transport", 00:19:14.545 "params": { 00:19:14.545 "abort_timeout_sec": 1, 00:19:14.545 "ack_timeout": 0, 00:19:14.545 "buf_cache_size": 4294967295, 00:19:14.545 "c2h_success": false, 00:19:14.545 "data_wr_pool_size": 0, 00:19:14.545 "dif_insert_or_strip": false, 00:19:14.545 "in_capsule_data_size": 4096, 00:19:14.545 "io_unit_size": 131072, 00:19:14.545 "max_aq_depth": 128, 00:19:14.545 "max_io_qpairs_per_ctrlr": 127, 00:19:14.545 "max_io_size": 131072, 00:19:14.545 "max_queue_depth": 128, 00:19:14.545 "num_shared_buffers": 511, 00:19:14.545 "sock_priority": 0, 00:19:14.545 "trtype": "TCP", 00:19:14.545 "zcopy": false 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_create_subsystem", 00:19:14.545 "params": { 00:19:14.545 "allow_any_host": false, 00:19:14.545 "ana_reporting": false, 00:19:14.545 "max_cntlid": 65519, 00:19:14.545 "max_namespaces": 10, 00:19:14.545 "min_cntlid": 1, 00:19:14.545 "model_number": "SPDK bdev Controller", 00:19:14.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.545 "serial_number": "SPDK00000000000001" 00:19:14.545 } 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_subsystem_add_host", 00:19:14.545 "params": { 00:19:14.545 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.545 "psk": "/tmp/tmp.ub1ZwOYebO" 00:19:14.545 
} 00:19:14.545 }, 00:19:14.545 { 00:19:14.545 "method": "nvmf_subsystem_add_ns", 00:19:14.545 "params": { 00:19:14.546 "namespace": { 00:19:14.546 "bdev_name": "malloc0", 00:19:14.546 "nguid": "774CEEFC609E4B3391C218DA9B342000", 00:19:14.546 "no_auto_visible": false, 00:19:14.546 "nsid": 1, 00:19:14.546 "uuid": "774ceefc-609e-4b33-91c2-18da9b342000" 00:19:14.546 }, 00:19:14.546 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:14.546 } 00:19:14.546 }, 00:19:14.546 { 00:19:14.546 "method": "nvmf_subsystem_add_listener", 00:19:14.546 "params": { 00:19:14.546 "listen_address": { 00:19:14.546 "adrfam": "IPv4", 00:19:14.546 "traddr": "10.0.0.2", 00:19:14.546 "trsvcid": "4420", 00:19:14.546 "trtype": "TCP" 00:19:14.546 }, 00:19:14.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.546 "secure_channel": true 00:19:14.546 } 00:19:14.546 } 00:19:14.546 ] 00:19:14.546 } 00:19:14.546 ] 00:19:14.546 }' 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85197 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85197 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85197 ']' 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.546 18:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.546 [2024-07-15 18:44:48.994058] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:14.546 [2024-07-15 18:44:48.994149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.810 [2024-07-15 18:44:49.131280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.810 [2024-07-15 18:44:49.239675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.810 [2024-07-15 18:44:49.239735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.810 [2024-07-15 18:44:49.239746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.810 [2024-07-15 18:44:49.239755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.810 [2024-07-15 18:44:49.239763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
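The restart above (target/tls.sh@203) no longer issues individual RPCs: it feeds the JSON captured earlier with save_config (target/tls.sh@196) back into nvmf_tgt through -c /dev/fd/62, TLS listener and PSK host entry included. The same pattern, sketched with an ordinary file instead of the process-substitution fd (saved_tgt.json is an illustrative name; the CI run additionally wraps the binary in ip netns exec and passes the -i/-e flags):

    # capture the live target configuration as JSON
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > saved_tgt.json
    # start a fresh target directly from that JSON instead of replaying each RPC by hand
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c saved_tgt.json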
00:19:14.810 [2024-07-15 18:44:49.239855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.067 [2024-07-15 18:44:49.451050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.067 [2024-07-15 18:44:49.467006] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:15.067 [2024-07-15 18:44:49.482999] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.067 [2024-07-15 18:44:49.483213] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.633 18:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.633 18:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.633 18:44:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.633 18:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.633 18:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85243 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85243 /var/tmp/bdevperf.sock 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85243 ']' 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
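The bdevperf instance that starts next (pid 85243) gets its controller from the JSON passed on -c /dev/fd/63, but it is equivalent to the explicit flow seen earlier: start bdevperf idle with -z and attach the TLS controller over its RPC socket (the route taken at target/tls.sh@187-192), then drive the verify workload with bdevperf.py perform_tests (target/tls.sh@211). A condensed sketch with the arguments taken from this run; the script waits for the RPC socket with waitforlisten between the first two steps:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach the target subsystem over TLS; --psk with a file path is the deprecated form
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO
    # run the configured workload against the attached bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests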
00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:15.633 18:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:15.633 "subsystems": [ 00:19:15.633 { 00:19:15.633 "subsystem": "keyring", 00:19:15.633 "config": [] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "iobuf", 00:19:15.633 "config": [ 00:19:15.633 { 00:19:15.633 "method": "iobuf_set_options", 00:19:15.633 "params": { 00:19:15.633 "large_bufsize": 135168, 00:19:15.633 "large_pool_count": 1024, 00:19:15.633 "small_bufsize": 8192, 00:19:15.633 "small_pool_count": 8192 00:19:15.633 } 00:19:15.633 } 00:19:15.633 ] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "sock", 00:19:15.633 "config": [ 00:19:15.633 { 00:19:15.633 "method": "sock_set_default_impl", 00:19:15.633 "params": { 00:19:15.633 "impl_name": "posix" 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "sock_impl_set_options", 00:19:15.633 "params": { 00:19:15.633 "enable_ktls": false, 00:19:15.633 "enable_placement_id": 0, 00:19:15.633 "enable_quickack": false, 00:19:15.633 "enable_recv_pipe": true, 00:19:15.633 "enable_zerocopy_send_client": false, 00:19:15.633 "enable_zerocopy_send_server": true, 00:19:15.633 "impl_name": "ssl", 00:19:15.633 "recv_buf_size": 4096, 00:19:15.633 "send_buf_size": 4096, 00:19:15.633 "tls_version": 0, 00:19:15.633 "zerocopy_threshold": 0 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "sock_impl_set_options", 00:19:15.633 "params": { 00:19:15.633 "enable_ktls": false, 00:19:15.633 "enable_placement_id": 0, 00:19:15.633 "enable_quickack": false, 00:19:15.633 "enable_recv_pipe": true, 00:19:15.633 "enable_zerocopy_send_client": false, 00:19:15.633 "enable_zerocopy_send_server": true, 00:19:15.633 "impl_name": "posix", 00:19:15.633 "recv_buf_size": 2097152, 00:19:15.633 "send_buf_size": 2097152, 00:19:15.633 "tls_version": 0, 00:19:15.633 "zerocopy_threshold": 0 00:19:15.633 } 00:19:15.633 } 00:19:15.633 ] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "vmd", 00:19:15.633 "config": [] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "accel", 00:19:15.633 "config": [ 00:19:15.633 { 00:19:15.633 "method": "accel_set_options", 00:19:15.633 "params": { 00:19:15.633 "buf_count": 2048, 00:19:15.633 "large_cache_size": 16, 00:19:15.633 "sequence_count": 2048, 00:19:15.633 "small_cache_size": 128, 00:19:15.633 "task_count": 2048 00:19:15.633 } 00:19:15.633 } 00:19:15.633 ] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "bdev", 00:19:15.633 "config": [ 00:19:15.633 { 00:19:15.633 "method": "bdev_set_options", 00:19:15.633 "params": { 00:19:15.633 "bdev_auto_examine": true, 00:19:15.633 "bdev_io_cache_size": 256, 00:19:15.633 "bdev_io_pool_size": 65535, 00:19:15.633 "iobuf_large_cache_size": 16, 00:19:15.633 "iobuf_small_cache_size": 128 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "bdev_raid_set_options", 00:19:15.633 "params": { 00:19:15.633 "process_window_size_kb": 1024 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "bdev_iscsi_set_options", 00:19:15.633 "params": { 00:19:15.633 "timeout_sec": 30 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": 
"bdev_nvme_set_options", 00:19:15.633 "params": { 00:19:15.633 "action_on_timeout": "none", 00:19:15.633 "allow_accel_sequence": false, 00:19:15.633 "arbitration_burst": 0, 00:19:15.633 "bdev_retry_count": 3, 00:19:15.633 "ctrlr_loss_timeout_sec": 0, 00:19:15.633 "delay_cmd_submit": true, 00:19:15.633 "dhchap_dhgroups": [ 00:19:15.633 "null", 00:19:15.633 "ffdhe2048", 00:19:15.633 "ffdhe3072", 00:19:15.633 "ffdhe4096", 00:19:15.633 "ffdhe6144", 00:19:15.633 "ffdhe8192" 00:19:15.633 ], 00:19:15.633 "dhchap_digests": [ 00:19:15.633 "sha256", 00:19:15.633 "sha384", 00:19:15.633 "sha512" 00:19:15.633 ], 00:19:15.633 "disable_auto_failback": false, 00:19:15.633 "fast_io_fail_timeout_sec": 0, 00:19:15.633 "generate_uuids": false, 00:19:15.633 "high_priority_weight": 0, 00:19:15.633 "io_path_stat": false, 00:19:15.633 "io_queue_requests": 512, 00:19:15.633 "keep_alive_timeout_ms": 10000, 00:19:15.633 "low_priority_weight": 0, 00:19:15.633 "medium_priority_weight": 0, 00:19:15.633 "nvme_adminq_poll_period_us": 10000, 00:19:15.633 "nvme_error_stat": false, 00:19:15.633 "nvme_ioq_poll_period_us": 0, 00:19:15.633 "rdma_cm_event_timeout_ms": 0, 00:19:15.633 "rdma_max_cq_size": 0, 00:19:15.633 "rdma_srq_size": 0, 00:19:15.633 "reconnect_delay_sec": 0, 00:19:15.633 "timeout_admin_us": 0, 00:19:15.633 "timeout_us": 0, 00:19:15.633 "transport_ack_timeout": 0, 00:19:15.633 "transport_retry_count": 4, 00:19:15.633 "transport_tos": 0 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "bdev_nvme_attach_controller", 00:19:15.633 "params": { 00:19:15.633 "adrfam": "IPv4", 00:19:15.633 "ctrlr_loss_timeout_sec": 0, 00:19:15.633 "ddgst": false, 00:19:15.633 "fast_io_fail_timeout_sec": 0, 00:19:15.633 "hdgst": false, 00:19:15.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.633 "name": "TLSTEST", 00:19:15.633 "prchk_guard": false, 00:19:15.633 "prchk_reftag": false, 00:19:15.633 "psk": "/tmp/tmp.ub1ZwOYebO", 00:19:15.633 "reconnect_delay_sec": 0, 00:19:15.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.633 "traddr": "10.0.0.2", 00:19:15.633 "trsvcid": "4420", 00:19:15.633 "trtype": "TCP" 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "bdev_nvme_set_hotplug", 00:19:15.633 "params": { 00:19:15.633 "enable": false, 00:19:15.633 "period_us": 100000 00:19:15.633 } 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "method": "bdev_wait_for_examine" 00:19:15.633 } 00:19:15.633 ] 00:19:15.633 }, 00:19:15.633 { 00:19:15.633 "subsystem": "nbd", 00:19:15.633 "config": [] 00:19:15.633 } 00:19:15.633 ] 00:19:15.633 }' 00:19:15.633 [2024-07-15 18:44:50.100066] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:15.633 [2024-07-15 18:44:50.100179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85243 ] 00:19:15.891 [2024-07-15 18:44:50.244875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.891 [2024-07-15 18:44:50.362573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.149 [2024-07-15 18:44:50.513209] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.149 [2024-07-15 18:44:50.513325] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.715 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.715 18:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:16.715 18:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.715 Running I/O for 10 seconds... 00:19:26.689 00:19:26.689 Latency(us) 00:19:26.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.689 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.689 Verification LBA range: start 0x0 length 0x2000 00:19:26.689 TLSTESTn1 : 10.01 4721.50 18.44 0.00 0.00 27065.39 5305.30 22594.32 00:19:26.689 =================================================================================================================== 00:19:26.689 Total : 4721.50 18.44 0.00 0.00 27065.39 5305.30 22594.32 00:19:26.689 0 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85243 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85243 ']' 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85243 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85243 00:19:26.689 killing process with pid 85243 00:19:26.689 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.689 00:19:26.689 Latency(us) 00:19:26.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.689 =================================================================================================================== 00:19:26.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85243' 00:19:26.689 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85243 00:19:26.689 [2024-07-15 18:45:01.136658] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:26.689 18:45:01 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85243 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85197 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85197 ']' 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85197 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85197 00:19:26.948 killing process with pid 85197 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85197' 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85197 00:19:26.948 [2024-07-15 18:45:01.370062] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:26.948 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85197 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85393 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85393 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85393 ']' 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.206 18:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.206 [2024-07-15 18:45:01.644884] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:27.206 [2024-07-15 18:45:01.645323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.464 [2024-07-15 18:45:01.794573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.464 [2024-07-15 18:45:01.911255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:27.464 [2024-07-15 18:45:01.911581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.464 [2024-07-15 18:45:01.911605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.464 [2024-07-15 18:45:01.911618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.464 [2024-07-15 18:45:01.911630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.464 [2024-07-15 18:45:01.911669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ub1ZwOYebO 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ub1ZwOYebO 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.401 [2024-07-15 18:45:02.826632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.401 18:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.659 18:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.919 [2024-07-15 18:45:03.318756] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.919 [2024-07-15 18:45:03.319023] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.919 18:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.176 malloc0 00:19:29.176 18:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.434 18:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ub1ZwOYebO 00:19:29.691 [2024-07-15 18:45:04.032563] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85490 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85490 /var/tmp/bdevperf.sock 00:19:29.691 
18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85490 ']' 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.691 18:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.691 [2024-07-15 18:45:04.097218] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:29.691 [2024-07-15 18:45:04.097310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85490 ] 00:19:29.950 [2024-07-15 18:45:04.241149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.950 [2024-07-15 18:45:04.359835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.885 18:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.885 18:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.885 18:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ub1ZwOYebO 00:19:30.885 18:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:31.216 [2024-07-15 18:45:05.581906] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.216 nvme0n1 00:19:31.216 18:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.475 Running I/O for 1 seconds... 
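Condensed from the rpc.py calls traced above (target/tls.sh@51-@58 on the target side and @227-@228 on the bdevperf side), the TLS/PSK path for this run is set up as follows. Every command is copied from the trace; the only assumptions are the relative paths (run from the SPDK repo root) and that nothing happens in between:

    # target side (nvmf_tgt)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ub1ZwOYebO

    # initiator side (bdevperf, RPC socket /var/tmp/bdevperf.sock)
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ub1ZwOYebO
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests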
00:19:32.408 00:19:32.408 Latency(us) 00:19:32.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:32.409 Verification LBA range: start 0x0 length 0x2000 00:19:32.409 nvme0n1 : 1.01 4602.65 17.98 0.00 0.00 27553.98 6428.77 22344.66 00:19:32.409 =================================================================================================================== 00:19:32.409 Total : 4602.65 17.98 0.00 0.00 27553.98 6428.77 22344.66 00:19:32.409 0 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85490 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85490 ']' 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85490 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85490 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:32.409 killing process with pid 85490 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85490' 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85490 00:19:32.409 Received shutdown signal, test time was about 1.000000 seconds 00:19:32.409 00:19:32.409 Latency(us) 00:19:32.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.409 =================================================================================================================== 00:19:32.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.409 18:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85490 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85393 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85393 ']' 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85393 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85393 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:32.666 killing process with pid 85393 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85393' 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85393 00:19:32.666 [2024-07-15 18:45:07.098082] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:32.666 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85393 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85565 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85565 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85565 ']' 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.924 18:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.924 [2024-07-15 18:45:07.381890] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:32.924 [2024-07-15 18:45:07.382033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.183 [2024-07-15 18:45:07.530879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.183 [2024-07-15 18:45:07.638738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.183 [2024-07-15 18:45:07.638790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.183 [2024-07-15 18:45:07.638801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.183 [2024-07-15 18:45:07.638810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.183 [2024-07-15 18:45:07.638818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
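The app_setup_trace notices just above name two ways to collect the nvmf trace from this target instance. A short sketch using exactly the command and shared-memory file the notices print; the tarball name mirrors what the cleanup step at the end of this log does (the output path here is illustrative):

    # live snapshot while the target is still running
    spdk_trace -s nvmf -i 0

    # or archive the shared-memory file for offline analysis
    tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0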
00:19:33.183 [2024-07-15 18:45:07.638846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.116 [2024-07-15 18:45:08.368343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.116 malloc0 00:19:34.116 [2024-07-15 18:45:08.398381] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.116 [2024-07-15 18:45:08.398743] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85621 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85621 /var/tmp/bdevperf.sock 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85621 ']' 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.116 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.117 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.117 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.117 18:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.117 [2024-07-15 18:45:08.483516] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:34.117 [2024-07-15 18:45:08.483608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85621 ] 00:19:34.375 [2024-07-15 18:45:08.623246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.375 [2024-07-15 18:45:08.766368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.306 18:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.306 18:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.306 18:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ub1ZwOYebO 00:19:35.306 18:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:35.564 [2024-07-15 18:45:09.903561] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.564 nvme0n1 00:19:35.564 18:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.835 Running I/O for 1 seconds... 00:19:36.798 00:19:36.798 Latency(us) 00:19:36.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.798 Verification LBA range: start 0x0 length 0x2000 00:19:36.798 nvme0n1 : 1.01 4616.55 18.03 0.00 0.00 27569.47 3292.40 23717.79 00:19:36.798 =================================================================================================================== 00:19:36.798 Total : 4616.55 18.03 0.00 0.00 27569.47 3292.40 23717.79 00:19:36.798 0 00:19:36.798 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:36.798 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.798 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.056 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.056 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:37.056 "subsystems": [ 00:19:37.056 { 00:19:37.056 "subsystem": "keyring", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "keyring_file_add_key", 00:19:37.056 "params": { 00:19:37.056 "name": "key0", 00:19:37.056 "path": "/tmp/tmp.ub1ZwOYebO" 00:19:37.056 } 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "iobuf", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "iobuf_set_options", 00:19:37.056 "params": { 00:19:37.056 "large_bufsize": 135168, 00:19:37.056 "large_pool_count": 1024, 00:19:37.056 "small_bufsize": 8192, 00:19:37.056 "small_pool_count": 8192 00:19:37.056 } 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "sock", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "sock_set_default_impl", 00:19:37.056 "params": { 00:19:37.056 "impl_name": "posix" 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "sock_impl_set_options", 00:19:37.056 "params": { 00:19:37.056 
"enable_ktls": false, 00:19:37.056 "enable_placement_id": 0, 00:19:37.056 "enable_quickack": false, 00:19:37.056 "enable_recv_pipe": true, 00:19:37.056 "enable_zerocopy_send_client": false, 00:19:37.056 "enable_zerocopy_send_server": true, 00:19:37.056 "impl_name": "ssl", 00:19:37.056 "recv_buf_size": 4096, 00:19:37.056 "send_buf_size": 4096, 00:19:37.056 "tls_version": 0, 00:19:37.056 "zerocopy_threshold": 0 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "sock_impl_set_options", 00:19:37.056 "params": { 00:19:37.056 "enable_ktls": false, 00:19:37.056 "enable_placement_id": 0, 00:19:37.056 "enable_quickack": false, 00:19:37.056 "enable_recv_pipe": true, 00:19:37.056 "enable_zerocopy_send_client": false, 00:19:37.056 "enable_zerocopy_send_server": true, 00:19:37.056 "impl_name": "posix", 00:19:37.056 "recv_buf_size": 2097152, 00:19:37.056 "send_buf_size": 2097152, 00:19:37.056 "tls_version": 0, 00:19:37.056 "zerocopy_threshold": 0 00:19:37.056 } 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "vmd", 00:19:37.056 "config": [] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "accel", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "accel_set_options", 00:19:37.056 "params": { 00:19:37.056 "buf_count": 2048, 00:19:37.056 "large_cache_size": 16, 00:19:37.056 "sequence_count": 2048, 00:19:37.056 "small_cache_size": 128, 00:19:37.056 "task_count": 2048 00:19:37.056 } 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "bdev", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "bdev_set_options", 00:19:37.056 "params": { 00:19:37.056 "bdev_auto_examine": true, 00:19:37.056 "bdev_io_cache_size": 256, 00:19:37.056 "bdev_io_pool_size": 65535, 00:19:37.056 "iobuf_large_cache_size": 16, 00:19:37.056 "iobuf_small_cache_size": 128 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_raid_set_options", 00:19:37.056 "params": { 00:19:37.056 "process_window_size_kb": 1024 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_iscsi_set_options", 00:19:37.056 "params": { 00:19:37.056 "timeout_sec": 30 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_nvme_set_options", 00:19:37.056 "params": { 00:19:37.056 "action_on_timeout": "none", 00:19:37.056 "allow_accel_sequence": false, 00:19:37.056 "arbitration_burst": 0, 00:19:37.056 "bdev_retry_count": 3, 00:19:37.056 "ctrlr_loss_timeout_sec": 0, 00:19:37.056 "delay_cmd_submit": true, 00:19:37.056 "dhchap_dhgroups": [ 00:19:37.056 "null", 00:19:37.056 "ffdhe2048", 00:19:37.056 "ffdhe3072", 00:19:37.056 "ffdhe4096", 00:19:37.056 "ffdhe6144", 00:19:37.056 "ffdhe8192" 00:19:37.056 ], 00:19:37.056 "dhchap_digests": [ 00:19:37.056 "sha256", 00:19:37.056 "sha384", 00:19:37.056 "sha512" 00:19:37.056 ], 00:19:37.056 "disable_auto_failback": false, 00:19:37.056 "fast_io_fail_timeout_sec": 0, 00:19:37.056 "generate_uuids": false, 00:19:37.056 "high_priority_weight": 0, 00:19:37.056 "io_path_stat": false, 00:19:37.056 "io_queue_requests": 0, 00:19:37.056 "keep_alive_timeout_ms": 10000, 00:19:37.056 "low_priority_weight": 0, 00:19:37.056 "medium_priority_weight": 0, 00:19:37.056 "nvme_adminq_poll_period_us": 10000, 00:19:37.056 "nvme_error_stat": false, 00:19:37.056 "nvme_ioq_poll_period_us": 0, 00:19:37.056 "rdma_cm_event_timeout_ms": 0, 00:19:37.056 "rdma_max_cq_size": 0, 00:19:37.056 "rdma_srq_size": 0, 00:19:37.056 "reconnect_delay_sec": 0, 00:19:37.056 "timeout_admin_us": 0, 
00:19:37.056 "timeout_us": 0, 00:19:37.056 "transport_ack_timeout": 0, 00:19:37.056 "transport_retry_count": 4, 00:19:37.056 "transport_tos": 0 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_nvme_set_hotplug", 00:19:37.056 "params": { 00:19:37.056 "enable": false, 00:19:37.056 "period_us": 100000 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_malloc_create", 00:19:37.056 "params": { 00:19:37.056 "block_size": 4096, 00:19:37.056 "name": "malloc0", 00:19:37.056 "num_blocks": 8192, 00:19:37.056 "optimal_io_boundary": 0, 00:19:37.056 "physical_block_size": 4096, 00:19:37.056 "uuid": "f13d1232-956b-481b-80e3-64f87dce82fb" 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "bdev_wait_for_examine" 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "nbd", 00:19:37.056 "config": [] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "scheduler", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "framework_set_scheduler", 00:19:37.056 "params": { 00:19:37.056 "name": "static" 00:19:37.056 } 00:19:37.056 } 00:19:37.056 ] 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "subsystem": "nvmf", 00:19:37.056 "config": [ 00:19:37.056 { 00:19:37.056 "method": "nvmf_set_config", 00:19:37.056 "params": { 00:19:37.056 "admin_cmd_passthru": { 00:19:37.056 "identify_ctrlr": false 00:19:37.056 }, 00:19:37.056 "discovery_filter": "match_any" 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_set_max_subsystems", 00:19:37.056 "params": { 00:19:37.056 "max_subsystems": 1024 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_set_crdt", 00:19:37.056 "params": { 00:19:37.056 "crdt1": 0, 00:19:37.056 "crdt2": 0, 00:19:37.056 "crdt3": 0 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_create_transport", 00:19:37.056 "params": { 00:19:37.056 "abort_timeout_sec": 1, 00:19:37.056 "ack_timeout": 0, 00:19:37.056 "buf_cache_size": 4294967295, 00:19:37.056 "c2h_success": false, 00:19:37.056 "data_wr_pool_size": 0, 00:19:37.056 "dif_insert_or_strip": false, 00:19:37.056 "in_capsule_data_size": 4096, 00:19:37.056 "io_unit_size": 131072, 00:19:37.056 "max_aq_depth": 128, 00:19:37.056 "max_io_qpairs_per_ctrlr": 127, 00:19:37.056 "max_io_size": 131072, 00:19:37.056 "max_queue_depth": 128, 00:19:37.056 "num_shared_buffers": 511, 00:19:37.056 "sock_priority": 0, 00:19:37.056 "trtype": "TCP", 00:19:37.056 "zcopy": false 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_create_subsystem", 00:19:37.056 "params": { 00:19:37.056 "allow_any_host": false, 00:19:37.056 "ana_reporting": false, 00:19:37.056 "max_cntlid": 65519, 00:19:37.056 "max_namespaces": 32, 00:19:37.056 "min_cntlid": 1, 00:19:37.056 "model_number": "SPDK bdev Controller", 00:19:37.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.056 "serial_number": "00000000000000000000" 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_subsystem_add_host", 00:19:37.056 "params": { 00:19:37.056 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.056 "psk": "key0" 00:19:37.056 } 00:19:37.056 }, 00:19:37.056 { 00:19:37.056 "method": "nvmf_subsystem_add_ns", 00:19:37.056 "params": { 00:19:37.057 "namespace": { 00:19:37.057 "bdev_name": "malloc0", 00:19:37.057 "nguid": "F13D1232956B481B80E364F87DCE82FB", 00:19:37.057 "no_auto_visible": false, 00:19:37.057 "nsid": 1, 00:19:37.057 "uuid": 
"f13d1232-956b-481b-80e3-64f87dce82fb" 00:19:37.057 }, 00:19:37.057 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:37.057 } 00:19:37.057 }, 00:19:37.057 { 00:19:37.057 "method": "nvmf_subsystem_add_listener", 00:19:37.057 "params": { 00:19:37.057 "listen_address": { 00:19:37.057 "adrfam": "IPv4", 00:19:37.057 "traddr": "10.0.0.2", 00:19:37.057 "trsvcid": "4420", 00:19:37.057 "trtype": "TCP" 00:19:37.057 }, 00:19:37.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.057 "secure_channel": false, 00:19:37.057 "sock_impl": "ssl" 00:19:37.057 } 00:19:37.057 } 00:19:37.057 ] 00:19:37.057 } 00:19:37.057 ] 00:19:37.057 }' 00:19:37.057 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:37.315 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:37.315 "subsystems": [ 00:19:37.315 { 00:19:37.315 "subsystem": "keyring", 00:19:37.315 "config": [ 00:19:37.315 { 00:19:37.315 "method": "keyring_file_add_key", 00:19:37.315 "params": { 00:19:37.315 "name": "key0", 00:19:37.315 "path": "/tmp/tmp.ub1ZwOYebO" 00:19:37.315 } 00:19:37.315 } 00:19:37.315 ] 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "subsystem": "iobuf", 00:19:37.315 "config": [ 00:19:37.315 { 00:19:37.315 "method": "iobuf_set_options", 00:19:37.315 "params": { 00:19:37.315 "large_bufsize": 135168, 00:19:37.315 "large_pool_count": 1024, 00:19:37.315 "small_bufsize": 8192, 00:19:37.315 "small_pool_count": 8192 00:19:37.315 } 00:19:37.315 } 00:19:37.315 ] 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "subsystem": "sock", 00:19:37.315 "config": [ 00:19:37.315 { 00:19:37.315 "method": "sock_set_default_impl", 00:19:37.315 "params": { 00:19:37.315 "impl_name": "posix" 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "sock_impl_set_options", 00:19:37.315 "params": { 00:19:37.315 "enable_ktls": false, 00:19:37.315 "enable_placement_id": 0, 00:19:37.315 "enable_quickack": false, 00:19:37.315 "enable_recv_pipe": true, 00:19:37.315 "enable_zerocopy_send_client": false, 00:19:37.315 "enable_zerocopy_send_server": true, 00:19:37.315 "impl_name": "ssl", 00:19:37.315 "recv_buf_size": 4096, 00:19:37.315 "send_buf_size": 4096, 00:19:37.315 "tls_version": 0, 00:19:37.315 "zerocopy_threshold": 0 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "sock_impl_set_options", 00:19:37.315 "params": { 00:19:37.315 "enable_ktls": false, 00:19:37.315 "enable_placement_id": 0, 00:19:37.315 "enable_quickack": false, 00:19:37.315 "enable_recv_pipe": true, 00:19:37.315 "enable_zerocopy_send_client": false, 00:19:37.315 "enable_zerocopy_send_server": true, 00:19:37.315 "impl_name": "posix", 00:19:37.315 "recv_buf_size": 2097152, 00:19:37.315 "send_buf_size": 2097152, 00:19:37.315 "tls_version": 0, 00:19:37.315 "zerocopy_threshold": 0 00:19:37.315 } 00:19:37.315 } 00:19:37.315 ] 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "subsystem": "vmd", 00:19:37.315 "config": [] 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "subsystem": "accel", 00:19:37.315 "config": [ 00:19:37.315 { 00:19:37.315 "method": "accel_set_options", 00:19:37.315 "params": { 00:19:37.315 "buf_count": 2048, 00:19:37.315 "large_cache_size": 16, 00:19:37.315 "sequence_count": 2048, 00:19:37.315 "small_cache_size": 128, 00:19:37.315 "task_count": 2048 00:19:37.315 } 00:19:37.315 } 00:19:37.315 ] 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "subsystem": "bdev", 00:19:37.315 "config": [ 00:19:37.315 { 00:19:37.315 "method": "bdev_set_options", 00:19:37.315 "params": { 00:19:37.315 
"bdev_auto_examine": true, 00:19:37.315 "bdev_io_cache_size": 256, 00:19:37.315 "bdev_io_pool_size": 65535, 00:19:37.315 "iobuf_large_cache_size": 16, 00:19:37.315 "iobuf_small_cache_size": 128 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "bdev_raid_set_options", 00:19:37.315 "params": { 00:19:37.315 "process_window_size_kb": 1024 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "bdev_iscsi_set_options", 00:19:37.315 "params": { 00:19:37.315 "timeout_sec": 30 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "bdev_nvme_set_options", 00:19:37.315 "params": { 00:19:37.315 "action_on_timeout": "none", 00:19:37.315 "allow_accel_sequence": false, 00:19:37.315 "arbitration_burst": 0, 00:19:37.315 "bdev_retry_count": 3, 00:19:37.315 "ctrlr_loss_timeout_sec": 0, 00:19:37.315 "delay_cmd_submit": true, 00:19:37.315 "dhchap_dhgroups": [ 00:19:37.315 "null", 00:19:37.315 "ffdhe2048", 00:19:37.315 "ffdhe3072", 00:19:37.315 "ffdhe4096", 00:19:37.315 "ffdhe6144", 00:19:37.315 "ffdhe8192" 00:19:37.315 ], 00:19:37.315 "dhchap_digests": [ 00:19:37.315 "sha256", 00:19:37.315 "sha384", 00:19:37.315 "sha512" 00:19:37.315 ], 00:19:37.315 "disable_auto_failback": false, 00:19:37.315 "fast_io_fail_timeout_sec": 0, 00:19:37.315 "generate_uuids": false, 00:19:37.315 "high_priority_weight": 0, 00:19:37.315 "io_path_stat": false, 00:19:37.315 "io_queue_requests": 512, 00:19:37.315 "keep_alive_timeout_ms": 10000, 00:19:37.315 "low_priority_weight": 0, 00:19:37.315 "medium_priority_weight": 0, 00:19:37.315 "nvme_adminq_poll_period_us": 10000, 00:19:37.315 "nvme_error_stat": false, 00:19:37.315 "nvme_ioq_poll_period_us": 0, 00:19:37.315 "rdma_cm_event_timeout_ms": 0, 00:19:37.315 "rdma_max_cq_size": 0, 00:19:37.315 "rdma_srq_size": 0, 00:19:37.315 "reconnect_delay_sec": 0, 00:19:37.315 "timeout_admin_us": 0, 00:19:37.315 "timeout_us": 0, 00:19:37.315 "transport_ack_timeout": 0, 00:19:37.315 "transport_retry_count": 4, 00:19:37.315 "transport_tos": 0 00:19:37.315 } 00:19:37.315 }, 00:19:37.315 { 00:19:37.315 "method": "bdev_nvme_attach_controller", 00:19:37.315 "params": { 00:19:37.316 "adrfam": "IPv4", 00:19:37.316 "ctrlr_loss_timeout_sec": 0, 00:19:37.316 "ddgst": false, 00:19:37.316 "fast_io_fail_timeout_sec": 0, 00:19:37.316 "hdgst": false, 00:19:37.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.316 "name": "nvme0", 00:19:37.316 "prchk_guard": false, 00:19:37.316 "prchk_reftag": false, 00:19:37.316 "psk": "key0", 00:19:37.316 "reconnect_delay_sec": 0, 00:19:37.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.316 "traddr": "10.0.0.2", 00:19:37.316 "trsvcid": "4420", 00:19:37.316 "trtype": "TCP" 00:19:37.316 } 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "method": "bdev_nvme_set_hotplug", 00:19:37.316 "params": { 00:19:37.316 "enable": false, 00:19:37.316 "period_us": 100000 00:19:37.316 } 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "method": "bdev_enable_histogram", 00:19:37.316 "params": { 00:19:37.316 "enable": true, 00:19:37.316 "name": "nvme0n1" 00:19:37.316 } 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "method": "bdev_wait_for_examine" 00:19:37.316 } 00:19:37.316 ] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "nbd", 00:19:37.316 "config": [] 00:19:37.316 } 00:19:37.316 ] 00:19:37.316 }' 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85621 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85621 ']' 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 85621 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85621 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.316 killing process with pid 85621 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85621' 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85621 00:19:37.316 Received shutdown signal, test time was about 1.000000 seconds 00:19:37.316 00:19:37.316 Latency(us) 00:19:37.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.316 =================================================================================================================== 00:19:37.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.316 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85621 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85565 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85565 ']' 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85565 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85565 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.573 killing process with pid 85565 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85565' 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85565 00:19:37.573 18:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85565 00:19:37.831 18:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:37.831 18:45:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.831 18:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:37.831 "subsystems": [ 00:19:37.831 { 00:19:37.831 "subsystem": "keyring", 00:19:37.831 "config": [ 00:19:37.831 { 00:19:37.831 "method": "keyring_file_add_key", 00:19:37.831 "params": { 00:19:37.831 "name": "key0", 00:19:37.831 "path": "/tmp/tmp.ub1ZwOYebO" 00:19:37.831 } 00:19:37.831 } 00:19:37.831 ] 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "subsystem": "iobuf", 00:19:37.831 "config": [ 00:19:37.831 { 00:19:37.831 "method": "iobuf_set_options", 00:19:37.831 "params": { 00:19:37.831 "large_bufsize": 135168, 00:19:37.831 "large_pool_count": 1024, 00:19:37.831 "small_bufsize": 8192, 00:19:37.831 "small_pool_count": 8192 00:19:37.831 } 00:19:37.831 } 00:19:37.831 ] 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "subsystem": "sock", 00:19:37.831 "config": [ 00:19:37.831 { 00:19:37.831 "method": "sock_set_default_impl", 00:19:37.831 "params": { 00:19:37.831 "impl_name": "posix" 00:19:37.831 } 
00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "method": "sock_impl_set_options", 00:19:37.831 "params": { 00:19:37.831 "enable_ktls": false, 00:19:37.831 "enable_placement_id": 0, 00:19:37.831 "enable_quickack": false, 00:19:37.831 "enable_recv_pipe": true, 00:19:37.831 "enable_zerocopy_send_client": false, 00:19:37.831 "enable_zerocopy_send_server": true, 00:19:37.831 "impl_name": "ssl", 00:19:37.831 "recv_buf_size": 4096, 00:19:37.831 "send_buf_size": 4096, 00:19:37.831 "tls_version": 0, 00:19:37.831 "zerocopy_threshold": 0 00:19:37.831 } 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "method": "sock_impl_set_options", 00:19:37.831 "params": { 00:19:37.831 "enable_ktls": false, 00:19:37.831 "enable_placement_id": 0, 00:19:37.831 "enable_quickack": false, 00:19:37.831 "enable_recv_pipe": true, 00:19:37.831 "enable_zerocopy_send_client": false, 00:19:37.831 "enable_zerocopy_send_server": true, 00:19:37.831 "impl_name": "posix", 00:19:37.831 "recv_buf_size": 2097152, 00:19:37.831 "send_buf_size": 2097152, 00:19:37.831 "tls_version": 0, 00:19:37.831 "zerocopy_threshold": 0 00:19:37.831 } 00:19:37.831 } 00:19:37.831 ] 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "subsystem": "vmd", 00:19:37.831 "config": [] 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "subsystem": "accel", 00:19:37.831 "config": [ 00:19:37.831 { 00:19:37.831 "method": "accel_set_options", 00:19:37.831 "params": { 00:19:37.831 "buf_count": 2048, 00:19:37.831 "large_cache_size": 16, 00:19:37.831 "sequence_count": 2048, 00:19:37.831 "small_cache_size": 128, 00:19:37.831 "task_count": 2048 00:19:37.831 } 00:19:37.831 } 00:19:37.831 ] 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "subsystem": "bdev", 00:19:37.831 "config": [ 00:19:37.831 { 00:19:37.831 "method": "bdev_set_options", 00:19:37.831 "params": { 00:19:37.831 "bdev_auto_examine": true, 00:19:37.831 "bdev_io_cache_size": 256, 00:19:37.831 "bdev_io_pool_size": 65535, 00:19:37.831 "iobuf_large_cache_size": 16, 00:19:37.831 "iobuf_small_cache_size": 128 00:19:37.831 } 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "method": "bdev_raid_set_options", 00:19:37.831 "params": { 00:19:37.831 "process_window_size_kb": 1024 00:19:37.831 } 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "method": "bdev_iscsi_set_options", 00:19:37.831 "params": { 00:19:37.831 "timeout_sec": 30 00:19:37.831 } 00:19:37.831 }, 00:19:37.831 { 00:19:37.831 "method": "bdev_nvme_set_options", 00:19:37.831 "params": { 00:19:37.831 "action_on_timeout": "none", 00:19:37.831 "allow_accel_sequence": false, 00:19:37.832 "arbitration_burst": 0, 00:19:37.832 "bdev_retry_count": 3, 00:19:37.832 "ctrlr_loss_timeout_sec": 0, 00:19:37.832 "delay_cmd_submit": true, 00:19:37.832 "dhchap_dhgroups": [ 00:19:37.832 "null", 00:19:37.832 "ffdhe2048", 00:19:37.832 "ffdhe3072", 00:19:37.832 "ffdhe4096", 00:19:37.832 "ffdhe6144", 00:19:37.832 "ffdhe8192" 00:19:37.832 ], 00:19:37.832 "dhchap_digests": [ 00:19:37.832 "sha256", 00:19:37.832 "sha384", 00:19:37.832 "sha512" 00:19:37.832 ], 00:19:37.832 "disable_auto_failback": false, 00:19:37.832 "fast_io_fail_timeout_sec": 0, 00:19:37.832 "generate_uuids": false, 00:19:37.832 "high_priority_weight": 0, 00:19:37.832 "io_path_stat": false, 00:19:37.832 "io_queue_requests": 0, 00:19:37.832 "keep_alive_timeout_ms": 10000, 00:19:37.832 "low_priority_weight": 0, 00:19:37.832 "medium_priority_weight": 0, 00:19:37.832 "nvme_adminq_poll_period_us": 10000, 00:19:37.832 "nvme_error_stat": false, 00:19:37.832 "nvme_ioq_poll_period_us": 0, 00:19:37.832 "rdma_cm_event_timeout_ms": 0, 00:19:37.832 
"rdma_max_cq_size": 0, 00:19:37.832 "rdma_srq_size": 0, 00:19:37.832 "reconnect_delay_sec": 0, 00:19:37.832 "timeout_admin_us": 0, 00:19:37.832 "timeout_us": 0, 00:19:37.832 "transport_ack_timeout": 0, 00:19:37.832 "transport_retry_count": 4, 00:19:37.832 "transport_tos": 0 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "bdev_nvme_set_hotplug", 00:19:37.832 "params": { 00:19:37.832 "enable": false, 00:19:37.832 "period_us": 100000 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "bdev_malloc_create", 00:19:37.832 "params": { 00:19:37.832 "block_size": 4096, 00:19:37.832 "name": "malloc0", 00:19:37.832 "num_blocks": 8192, 00:19:37.832 "optimal_io_boundary": 0, 00:19:37.832 "physical_block_size": 4096, 00:19:37.832 "uuid": "f13d1232-956b-481b-80e3-64f87dce82fb" 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "bdev_wait_for_examine" 00:19:37.832 } 00:19:37.832 ] 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "subsystem": "nbd", 00:19:37.832 "config": [] 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "subsystem": "scheduler", 00:19:37.832 "config": [ 00:19:37.832 { 00:19:37.832 "method": "framework_set_scheduler", 00:19:37.832 "params": { 00:19:37.832 "name": "static" 00:19:37.832 } 00:19:37.832 } 00:19:37.832 ] 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "subsystem": "nvmf", 00:19:37.832 "config": [ 00:19:37.832 { 00:19:37.832 "method": "nvmf_set_config", 00:19:37.832 "params": { 00:19:37.832 "admin_cmd_passthru": { 00:19:37.832 "identify_ctrlr": false 00:19:37.832 }, 00:19:37.832 "discovery_filter": "match_any" 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_set_max_subsystems", 00:19:37.832 "params": { 00:19:37.832 "max_subsystems": 1024 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_set_crdt", 00:19:37.832 "params": { 00:19:37.832 "crdt1": 0, 00:19:37.832 "crdt2": 0, 00:19:37.832 "crdt3": 0 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_create_transport", 00:19:37.832 "params": { 00:19:37.832 "abort_timeout_sec": 1, 00:19:37.832 "ack_timeout": 0, 00:19:37.832 "buf_cache_size": 4294967295, 00:19:37.832 "c2h_success": false, 00:19:37.832 "data_wr_pool_size": 0, 00:19:37.832 "dif_insert_or_strip": false, 00:19:37.832 "in_capsule_data_size": 4096, 00:19:37.832 "io_unit_size": 131072, 00:19:37.832 "max_aq_depth": 128, 00:19:37.832 "max_io_qpairs_per_ctrlr": 127, 00:19:37.832 "max_io_size": 131072, 00:19:37.832 "max_queue_depth": 128, 00:19:37.832 "num_shared_buffers": 511, 00:19:37.832 "sock_priority": 0, 00:19:37.832 "trtype": "TCP", 00:19:37.832 "zcopy": false 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_create_subsystem", 00:19:37.832 "params": { 00:19:37.832 "allow_any_host": false, 00:19:37.832 "ana_reporting": false, 00:19:37.832 "max_cntlid": 65519, 00:19:37.832 "max_namespaces": 32, 00:19:37.832 "min_cntlid": 1, 00:19:37.832 "model_number": "SPDK bdev Controller", 00:19:37.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.832 "serial_number": "00000000000000000000" 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_subsystem_add_host", 00:19:37.832 "params": { 00:19:37.832 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.832 "psk": "key0" 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_subsystem_add_ns", 00:19:37.832 "params": { 00:19:37.832 "namespace": { 00:19:37.832 "bdev_name": "malloc0", 00:19:37.832 "nguid": 
"F13D1232956B481B80E364F87DCE82FB", 00:19:37.832 "no_auto_visible": false, 00:19:37.832 "nsid": 1, 00:19:37.832 "uuid": "f13d1232-956b-481b-80e3-64f87dce82fb" 00:19:37.832 }, 00:19:37.832 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:37.832 } 00:19:37.832 }, 00:19:37.832 { 00:19:37.832 "method": "nvmf_subsystem_add_listener", 00:19:37.832 "params": { 00:19:37.832 "listen_address": { 00:19:37.832 "adrfam": "IPv4", 00:19:37.832 "traddr": "10.0.0.2", 00:19:37.832 "trsvcid": "4420", 00:19:37.832 "trtype": "TCP" 00:19:37.832 }, 00:19:37.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.832 "secure_channel": false, 00:19:37.832 "sock_impl": "ssl" 00:19:37.832 } 00:19:37.832 } 00:19:37.832 ] 00:19:37.832 } 00:19:37.832 ] 00:19:37.832 }' 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85706 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85706 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85706 ']' 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.832 18:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 [2024-07-15 18:45:12.217432] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:37.832 [2024-07-15 18:45:12.217558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.091 [2024-07-15 18:45:12.357090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.091 [2024-07-15 18:45:12.457756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.091 [2024-07-15 18:45:12.457805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.091 [2024-07-15 18:45:12.457817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.091 [2024-07-15 18:45:12.457826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.091 [2024-07-15 18:45:12.457834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.091 [2024-07-15 18:45:12.457912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.348 [2024-07-15 18:45:12.676395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.348 [2024-07-15 18:45:12.708335] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.348 [2024-07-15 18:45:12.708564] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85750 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85750 /var/tmp/bdevperf.sock 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85750 ']' 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:38.912 18:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:38.912 "subsystems": [ 00:19:38.912 { 00:19:38.912 "subsystem": "keyring", 00:19:38.912 "config": [ 00:19:38.912 { 00:19:38.912 "method": "keyring_file_add_key", 00:19:38.912 "params": { 00:19:38.912 "name": "key0", 00:19:38.912 "path": "/tmp/tmp.ub1ZwOYebO" 00:19:38.912 } 00:19:38.912 } 00:19:38.912 ] 00:19:38.912 }, 00:19:38.912 { 00:19:38.912 "subsystem": "iobuf", 00:19:38.912 "config": [ 00:19:38.912 { 00:19:38.912 "method": "iobuf_set_options", 00:19:38.912 "params": { 00:19:38.912 "large_bufsize": 135168, 00:19:38.912 "large_pool_count": 1024, 00:19:38.912 "small_bufsize": 8192, 00:19:38.912 "small_pool_count": 8192 00:19:38.912 } 00:19:38.912 } 00:19:38.912 ] 00:19:38.912 }, 00:19:38.912 { 00:19:38.912 "subsystem": "sock", 00:19:38.912 "config": [ 00:19:38.912 { 00:19:38.912 "method": "sock_set_default_impl", 00:19:38.912 "params": { 00:19:38.912 "impl_name": "posix" 00:19:38.912 } 00:19:38.912 }, 00:19:38.912 { 00:19:38.912 "method": "sock_impl_set_options", 00:19:38.912 "params": { 00:19:38.912 "enable_ktls": false, 00:19:38.912 "enable_placement_id": 0, 00:19:38.912 "enable_quickack": false, 00:19:38.912 "enable_recv_pipe": true, 00:19:38.912 "enable_zerocopy_send_client": false, 00:19:38.912 "enable_zerocopy_send_server": true, 00:19:38.912 "impl_name": "ssl", 00:19:38.912 "recv_buf_size": 4096, 00:19:38.912 "send_buf_size": 4096, 00:19:38.912 "tls_version": 0, 00:19:38.912 "zerocopy_threshold": 0 00:19:38.912 } 00:19:38.912 }, 00:19:38.912 { 00:19:38.912 "method": "sock_impl_set_options", 00:19:38.912 "params": { 00:19:38.912 "enable_ktls": false, 00:19:38.912 "enable_placement_id": 0, 00:19:38.912 "enable_quickack": false, 00:19:38.913 "enable_recv_pipe": true, 00:19:38.913 "enable_zerocopy_send_client": false, 00:19:38.913 "enable_zerocopy_send_server": true, 00:19:38.913 "impl_name": "posix", 00:19:38.913 "recv_buf_size": 2097152, 00:19:38.913 "send_buf_size": 2097152, 00:19:38.913 "tls_version": 0, 00:19:38.913 "zerocopy_threshold": 0 00:19:38.913 } 00:19:38.913 } 00:19:38.913 ] 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "subsystem": "vmd", 00:19:38.913 "config": [] 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "subsystem": "accel", 00:19:38.913 "config": [ 00:19:38.913 { 00:19:38.913 "method": "accel_set_options", 00:19:38.913 "params": { 00:19:38.913 "buf_count": 2048, 00:19:38.913 "large_cache_size": 16, 00:19:38.913 "sequence_count": 2048, 00:19:38.913 "small_cache_size": 128, 00:19:38.913 "task_count": 2048 00:19:38.913 } 00:19:38.913 } 00:19:38.913 ] 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "subsystem": "bdev", 00:19:38.913 "config": [ 00:19:38.913 { 00:19:38.913 "method": "bdev_set_options", 00:19:38.913 "params": { 00:19:38.913 "bdev_auto_examine": true, 00:19:38.913 "bdev_io_cache_size": 256, 00:19:38.913 "bdev_io_pool_size": 65535, 00:19:38.913 "iobuf_large_cache_size": 16, 00:19:38.913 "iobuf_small_cache_size": 128 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_raid_set_options", 00:19:38.913 "params": { 00:19:38.913 "process_window_size_kb": 1024 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 
{ 00:19:38.913 "method": "bdev_iscsi_set_options", 00:19:38.913 "params": { 00:19:38.913 "timeout_sec": 30 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_nvme_set_options", 00:19:38.913 "params": { 00:19:38.913 "action_on_timeout": "none", 00:19:38.913 "allow_accel_sequence": false, 00:19:38.913 "arbitration_burst": 0, 00:19:38.913 "bdev_retry_count": 3, 00:19:38.913 "ctrlr_loss_timeout_sec": 0, 00:19:38.913 "delay_cmd_submit": true, 00:19:38.913 "dhchap_dhgroups": [ 00:19:38.913 "null", 00:19:38.913 "ffdhe2048", 00:19:38.913 "ffdhe3072", 00:19:38.913 "ffdhe4096", 00:19:38.913 "ffdhe6144", 00:19:38.913 "ffdhe8192" 00:19:38.913 ], 00:19:38.913 "dhchap_digests": [ 00:19:38.913 "sha256", 00:19:38.913 "sha384", 00:19:38.913 "sha512" 00:19:38.913 ], 00:19:38.913 "disable_auto_failback": false, 00:19:38.913 "fast_io_fail_timeout_sec": 0, 00:19:38.913 "generate_uuids": false, 00:19:38.913 "high_priority_weight": 0, 00:19:38.913 "io_path_stat": false, 00:19:38.913 "io_queue_requests": 512, 00:19:38.913 "keep_alive_timeout_ms": 10000, 00:19:38.913 "low_priority_weight": 0, 00:19:38.913 "medium_priority_weight": 0, 00:19:38.913 "nvme_adminq_poll_period_us": 10000, 00:19:38.913 "nvme_error_stat": false, 00:19:38.913 "nvme_ioq_poll_period_us": 0, 00:19:38.913 "rdma_cm_event_timeout_ms": 0, 00:19:38.913 "rdma_max_cq_size": 0, 00:19:38.913 "rdma_srq_size": 0, 00:19:38.913 "reconnect_delay_sec": 0, 00:19:38.913 "timeout_admin_us": 0, 00:19:38.913 "timeout_us": 0, 00:19:38.913 "transport_ack_timeout": 0, 00:19:38.913 "transport_retry_count": 4, 00:19:38.913 "transport_tos": 0 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_nvme_attach_controller", 00:19:38.913 "params": { 00:19:38.913 "adrfam": "IPv4", 00:19:38.913 "ctrlr_loss_timeout_sec": 0, 00:19:38.913 "ddgst": false, 00:19:38.913 "fast_io_fail_timeout_sec": 0, 00:19:38.913 "hdgst": false, 00:19:38.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.913 "name": "nvme0", 00:19:38.913 "prchk_guard": false, 00:19:38.913 "prchk_reftag": false, 00:19:38.913 "psk": "key0", 00:19:38.913 "reconnect_delay_sec": 0, 00:19:38.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.913 "traddr": "10.0.0.2", 00:19:38.913 "trsvcid": "4420", 00:19:38.913 "trtype": "TCP" 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_nvme_set_hotplug", 00:19:38.913 "params": { 00:19:38.913 "enable": false, 00:19:38.913 "period_us": 100000 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_enable_histogram", 00:19:38.913 "params": { 00:19:38.913 "enable": true, 00:19:38.913 "name": "nvme0n1" 00:19:38.913 } 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "method": "bdev_wait_for_examine" 00:19:38.913 } 00:19:38.913 ] 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "subsystem": "nbd", 00:19:38.913 "config": [] 00:19:38.913 } 00:19:38.913 ] 00:19:38.913 }' 00:19:38.913 [2024-07-15 18:45:13.239901] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:38.913 [2024-07-15 18:45:13.240035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85750 ] 00:19:38.913 [2024-07-15 18:45:13.387356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.169 [2024-07-15 18:45:13.505341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.426 [2024-07-15 18:45:13.664076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.683 18:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.683 18:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:39.683 18:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:39.683 18:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:40.247 18:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.247 18:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.247 Running I/O for 1 seconds... 00:19:41.181 00:19:41.181 Latency(us) 00:19:41.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.181 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:41.181 Verification LBA range: start 0x0 length 0x2000 00:19:41.181 nvme0n1 : 1.01 4870.95 19.03 0.00 0.00 26044.78 5804.62 24591.60 00:19:41.181 =================================================================================================================== 00:19:41.181 Total : 4870.95 19.03 0.00 0.00 26044.78 5804.62 24591.60 00:19:41.181 0 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:41.181 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:41.181 nvmf_trace.0 00:19:41.438 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:41.438 18:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85750 00:19:41.438 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85750 ']' 00:19:41.438 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85750 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.439 
18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85750 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:41.439 killing process with pid 85750 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85750' 00:19:41.439 Received shutdown signal, test time was about 1.000000 seconds 00:19:41.439 00:19:41.439 Latency(us) 00:19:41.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.439 =================================================================================================================== 00:19:41.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85750 00:19:41.439 18:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85750 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.696 18:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.696 rmmod nvme_tcp 00:19:41.696 rmmod nvme_fabrics 00:19:41.696 rmmod nvme_keyring 00:19:41.696 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.696 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:41.696 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:41.696 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85706 ']' 00:19:41.696 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85706 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85706 ']' 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85706 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85706 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:41.697 killing process with pid 85706 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85706' 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85706 00:19:41.697 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85706 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.wNmruzjlGL /tmp/tmp.FXoClgbFDh /tmp/tmp.ub1ZwOYebO 00:19:41.955 00:19:41.955 real 1m26.581s 00:19:41.955 user 2m14.661s 00:19:41.955 sys 0m30.367s 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.955 ************************************ 00:19:41.955 END TEST nvmf_tls 00:19:41.955 18:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.955 ************************************ 00:19:41.955 18:45:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.955 18:45:16 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:41.955 18:45:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.955 18:45:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.955 18:45:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.955 ************************************ 00:19:41.955 START TEST nvmf_fips 00:19:41.955 ************************************ 00:19:41.955 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:42.213 * Looking for test storage... 
00:19:42.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:42.213 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:42.214 Error setting digest 00:19:42.214 0052D9703C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:42.214 0052D9703C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:42.214 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:42.472 Cannot find device "nvmf_tgt_br" 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.472 Cannot find device "nvmf_tgt_br2" 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:42.472 Cannot find device "nvmf_tgt_br" 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:42.472 Cannot find device "nvmf_tgt_br2" 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:42.472 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:42.730 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:42.730 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:42.730 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:42.730 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:42.730 18:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:42.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:19:42.730 00:19:42.730 --- 10.0.0.2 ping statistics --- 00:19:42.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.730 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:42.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:42.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:19:42.730 00:19:42.730 --- 10.0.0.3 ping statistics --- 00:19:42.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.730 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:42.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:42.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:19:42.730 00:19:42.730 --- 10.0.0.1 ping statistics --- 00:19:42.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.730 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86033 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86033 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86033 ']' 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.730 18:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.730 [2024-07-15 18:45:17.146094] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
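The nvmf_veth_init trace above reduces to one veth pair per target interface, bridged back to the host, plus an iptables rule that accepts NVMe/TCP traffic on port 4420. A condensed sketch of that topology, reusing the names and addresses from the log but showing only the first target interface (the real helper also creates nvmf_tgt_if2 with 10.0.0.3):

  # host side keeps nvmf_init_if, the namespace gets nvmf_tgt_if
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the two host-side veth peers so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # same reachability check whose output appears above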
00:19:42.730 [2024-07-15 18:45:17.146192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.988 [2024-07-15 18:45:17.287287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.988 [2024-07-15 18:45:17.406935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.988 [2024-07-15 18:45:17.407014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.988 [2024-07-15 18:45:17.407029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.988 [2024-07-15 18:45:17.407042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.988 [2024-07-15 18:45:17.407053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.988 [2024-07-15 18:45:17.407097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:43.922 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.179 [2024-07-15 18:45:18.406096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.179 [2024-07-15 18:45:18.422045] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.179 [2024-07-15 18:45:18.422267] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.179 [2024-07-15 18:45:18.451563] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:44.179 malloc0 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86085 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 86085 /var/tmp/bdevperf.sock 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86085 ']' 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.179 18:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.179 [2024-07-15 18:45:18.564990] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:44.179 [2024-07-15 18:45:18.565101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86085 ] 00:19:44.436 [2024-07-15 18:45:18.711320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.436 [2024-07-15 18:45:18.831477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.368 18:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.368 18:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:45.368 18:45:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:45.625 [2024-07-15 18:45:19.863760] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.625 [2024-07-15 18:45:19.863917] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:45.625 TLSTESTn1 00:19:45.625 18:45:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:45.625 Running I/O for 10 seconds... 
00:19:57.816 00:19:57.816 Latency(us) 00:19:57.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.816 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:57.816 Verification LBA range: start 0x0 length 0x2000 00:19:57.816 TLSTESTn1 : 10.01 4705.72 18.38 0.00 0.00 27152.63 6241.52 20097.71 00:19:57.816 =================================================================================================================== 00:19:57.816 Total : 4705.72 18.38 0.00 0.00 27152.63 6241.52 20097.71 00:19:57.816 0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:57.816 nvmf_trace.0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86085 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86085 ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86085 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86085 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86085' 00:19:57.816 killing process with pid 86085 00:19:57.816 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.816 00:19:57.816 Latency(us) 00:19:57.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.816 =================================================================================================================== 00:19:57.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86085 00:19:57.816 [2024-07-15 18:45:30.290770] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86085 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
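With the 10-second verify run above finished, the initiator half of this FIPS case is easier to see without the xtrace noise: write the interchange-format TLS PSK to a mode-0600 file, start bdevperf in wait mode (-z) on its own RPC socket, attach the controller over that PSK, then drive the workload with perform_tests. A condensed sketch reusing the exact commands from the trace (backgrounding and the wait for the RPC socket are simplified here):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                      # the PSK file must not be world-readable

  # -z makes bdevperf wait on /var/tmp/bdevperf.sock until a bdev is attached
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests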
00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.816 rmmod nvme_tcp 00:19:57.816 rmmod nvme_fabrics 00:19:57.816 rmmod nvme_keyring 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86033 ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86033 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86033 ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86033 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86033 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:57.816 killing process with pid 86033 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86033' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86033 00:19:57.816 [2024-07-15 18:45:30.599905] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86033 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:57.816 00:19:57.816 real 0m14.499s 00:19:57.816 user 0m19.294s 00:19:57.816 sys 0m6.206s 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.816 18:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.816 ************************************ 00:19:57.816 END TEST nvmf_fips 00:19:57.816 ************************************ 00:19:57.816 18:45:30 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:57.816 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.816 18:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.816 ************************************ 00:19:57.816 START TEST nvmf_multicontroller 00:19:57.816 ************************************ 00:19:57.816 18:45:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:57.816 * Looking for test storage... 00:19:57.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.816 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.817 18:45:31 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.817 Cannot find device "nvmf_tgt_br" 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.817 Cannot find device "nvmf_tgt_br2" 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.817 Cannot find device "nvmf_tgt_br" 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.817 Cannot find device "nvmf_tgt_br2" 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.817 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:57.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:19:57.818 00:19:57.818 --- 10.0.0.2 ping statistics --- 00:19:57.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.818 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:57.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:57.818 00:19:57.818 --- 10.0.0.3 ping statistics --- 00:19:57.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.818 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:57.818 00:19:57.818 --- 10.0.0.1 ping statistics --- 00:19:57.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.818 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86450 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86450 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86450 ']' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.818 18:45:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:57.818 [2024-07-15 18:45:31.568329] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:57.818 [2024-07-15 18:45:31.568437] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.818 [2024-07-15 18:45:31.723538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.818 [2024-07-15 18:45:31.871116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:57.818 [2024-07-15 18:45:31.871184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.818 [2024-07-15 18:45:31.871200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.818 [2024-07-15 18:45:31.871213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.818 [2024-07-15 18:45:31.871224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.818 [2024-07-15 18:45:31.871528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.818 [2024-07-15 18:45:31.872436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.818 [2024-07-15 18:45:31.872460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 [2024-07-15 18:45:32.725098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 Malloc0 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 [2024-07-15 18:45:32.792213] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 [2024-07-15 18:45:32.804166] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 Malloc1 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.384 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
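At this point the trace has built the whole target side: the TCP transport, two 64 MB malloc bdevs, and two subsystems (cnode1 and cnode2) each listening on 10.0.0.2 ports 4420 and 4421. For reference, the same configuration can be reproduced by hand with SPDK's scripts/rpc.py client. The sketch below simply re-issues the RPCs visible in the trace and assumes nvmf_tgt is already running with its default RPC socket at /var/tmp/spdk.sock (the rpc_cmd wrapper used by the test hides that detail); all NQNs, serial numbers, addresses and ports are copied from the log.

# Minimal sketch: rebuild the two test subsystems by hand.
rpc=./scripts/rpc.py   # run from an SPDK checkout, target already started

$rpc nvmf_create_transport -t tcp -o -u 8192            # transport options exactly as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

With that in place, the "Waiting for process to start up..." message directly above refers to the bdevperf instance the test launches next with -z -r /var/tmp/bdevperf.sock; all further bdev_nvme_* RPCs in the trace go through that second socket rather than the target's own.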
00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86507 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86507 /var/tmp/bdevperf.sock 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86507 ']' 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.643 18:45:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.900 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.900 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:58.900 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:58.900 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.900 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 NVMe0n1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.165 1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.165 18:45:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 2024/07/15 18:45:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:59.165 request: 00:19:59.165 { 00:19:59.165 "method": "bdev_nvme_attach_controller", 00:19:59.165 "params": { 00:19:59.165 "name": "NVMe0", 00:19:59.165 "trtype": "tcp", 00:19:59.165 "traddr": "10.0.0.2", 00:19:59.165 "adrfam": "ipv4", 00:19:59.165 "trsvcid": "4420", 00:19:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.165 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:59.165 "hostaddr": "10.0.0.2", 00:19:59.165 "hostsvcid": "60000", 00:19:59.165 "prchk_reftag": false, 00:19:59.165 "prchk_guard": false, 00:19:59.165 "hdgst": false, 00:19:59.165 "ddgst": false 00:19:59.165 } 00:19:59.165 } 00:19:59.165 Got JSON-RPC error response 00:19:59.165 GoRPCClient: error on JSON-RPC call 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 2024/07/15 18:45:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:59.165 request: 00:19:59.165 { 00:19:59.165 "method": "bdev_nvme_attach_controller", 00:19:59.165 "params": { 00:19:59.165 "name": "NVMe0", 00:19:59.165 "trtype": "tcp", 00:19:59.165 "traddr": "10.0.0.2", 00:19:59.165 "adrfam": "ipv4", 00:19:59.165 "trsvcid": "4420", 00:19:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:59.165 "hostaddr": "10.0.0.2", 00:19:59.165 "hostsvcid": "60000", 00:19:59.165 "prchk_reftag": false, 00:19:59.165 "prchk_guard": false, 00:19:59.165 "hdgst": false, 00:19:59.165 "ddgst": false 00:19:59.165 } 00:19:59.165 } 00:19:59.165 Got JSON-RPC error response 00:19:59.165 GoRPCClient: error on JSON-RPC call 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 2024/07/15 18:45:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:59.165 request: 00:19:59.165 { 00:19:59.165 "method": "bdev_nvme_attach_controller", 00:19:59.165 "params": { 00:19:59.165 "name": "NVMe0", 00:19:59.165 "trtype": "tcp", 00:19:59.165 "traddr": "10.0.0.2", 00:19:59.165 "adrfam": "ipv4", 00:19:59.165 "trsvcid": "4420", 00:19:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.165 "hostaddr": "10.0.0.2", 00:19:59.165 "hostsvcid": "60000", 00:19:59.165 "prchk_reftag": false, 00:19:59.165 "prchk_guard": false, 00:19:59.165 "hdgst": false, 00:19:59.165 "ddgst": false, 00:19:59.165 "multipath": "disable" 00:19:59.165 } 00:19:59.165 } 00:19:59.165 Got JSON-RPC error response 00:19:59.165 GoRPCClient: error on JSON-RPC call 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 2024/07/15 18:45:33 error on JSON-RPC 
call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:59.165 request: 00:19:59.165 { 00:19:59.165 "method": "bdev_nvme_attach_controller", 00:19:59.165 "params": { 00:19:59.165 "name": "NVMe0", 00:19:59.165 "trtype": "tcp", 00:19:59.165 "traddr": "10.0.0.2", 00:19:59.165 "adrfam": "ipv4", 00:19:59.165 "trsvcid": "4420", 00:19:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.165 "hostaddr": "10.0.0.2", 00:19:59.165 "hostsvcid": "60000", 00:19:59.165 "prchk_reftag": false, 00:19:59.165 "prchk_guard": false, 00:19:59.165 "hdgst": false, 00:19:59.165 "ddgst": false, 00:19:59.165 "multipath": "failover" 00:19:59.165 } 00:19:59.165 } 00:19:59.165 Got JSON-RPC error response 00:19:59.165 GoRPCClient: error on JSON-RPC call 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.165 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.437 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.437 18:45:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:59.437 18:45:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.370 0 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86507 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86507 ']' 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86507 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.370 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86507 00:20:00.627 killing process with pid 86507 00:20:00.627 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.627 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.627 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86507' 00:20:00.627 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86507 00:20:00.627 18:45:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86507 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 
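To recap the multipath checks above: once NVMe0 owns the 10.0.0.2:4420 path, every further bdev_nvme_attach_controller under that name against the same traddr/trsvcid comes back with JSON-RPC error Code=-114, whether it uses a different hostnqn, points at cnode2, or passes "-x disable" / "-x failover". Only the attach to the second listener on port 4421 succeeds; that extra path is then detached, NVMe1 is attached on 4421 instead, and the script requires exactly two controllers before letting bdevperf run its write workload. The count is read from the bdevperf app's own RPC socket; a hand-run version of the same gate, assuming bdevperf was started with "-r /var/tmp/bdevperf.sock" as in the trace, could look like this:

# Minimal sketch of the controller-count check (mirrors host/multicontroller.sh@90 above).
rpc=./scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# bdev_nvme_get_controllers returns JSON describing the attached controllers;
# counting lines that mention NVMe matches the grep the test itself uses.
count=$($rpc -s "$sock" bdev_nvme_get_controllers | grep -c NVMe)
if [ "$count" -ne 2 ]; then
    echo "expected 2 attached controllers (NVMe0 + NVMe1), got $count" >&2
    exit 1
fi

The try.txt listing that follows is the captured bdevperf output; the pap helper cats it into this log so the per-run I/O statistics survive, then removes the file (the rm -f near the end of the dump).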
00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:00.627 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:00.900 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:00.900 [2024-07-15 18:45:32.923440] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:00.900 [2024-07-15 18:45:32.923645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86507 ] 00:20:00.900 [2024-07-15 18:45:33.073541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.900 [2024-07-15 18:45:33.193385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.900 [2024-07-15 18:45:33.645055] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 31adf1b4-5391-4f1c-9af8-19acb37a8706 already exists 00:20:00.900 [2024-07-15 18:45:33.645128] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:31adf1b4-5391-4f1c-9af8-19acb37a8706 alias for bdev NVMe1n1 00:20:00.900 [2024-07-15 18:45:33.645145] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:00.900 Running I/O for 1 seconds... 00:20:00.900 00:20:00.900 Latency(us) 00:20:00.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.900 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:00.900 NVMe0n1 : 1.00 21139.25 82.58 0.00 0.00 6039.55 3089.55 11796.48 00:20:00.900 =================================================================================================================== 00:20:00.900 Total : 21139.25 82.58 0.00 0.00 6039.55 3089.55 11796.48 00:20:00.900 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.900 00:20:00.900 Latency(us) 00:20:00.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.900 =================================================================================================================== 00:20:00.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.900 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.900 rmmod nvme_tcp 00:20:00.900 rmmod nvme_fabrics 00:20:00.900 rmmod nvme_keyring 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86450 ']' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86450 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86450 ']' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86450 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86450 00:20:00.900 killing process with pid 86450 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86450' 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86450 00:20:00.900 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86450 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:01.158 00:20:01.158 real 0m4.577s 00:20:01.158 user 0m13.382s 00:20:01.158 sys 0m1.286s 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.158 18:45:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 ************************************ 00:20:01.158 END TEST nvmf_multicontroller 00:20:01.158 ************************************ 00:20:01.158 18:45:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:01.158 18:45:35 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:01.158 18:45:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.158 18:45:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.158 18:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 ************************************ 00:20:01.158 START TEST nvmf_aer 00:20:01.158 ************************************ 00:20:01.158 18:45:35 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:01.416 * Looking for test storage... 00:20:01.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.416 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:01.417 Cannot find device "nvmf_tgt_br" 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.417 Cannot find device "nvmf_tgt_br2" 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:01.417 Cannot find device "nvmf_tgt_br" 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:01.417 Cannot find device "nvmf_tgt_br2" 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:01.417 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.674 
18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.674 18:45:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.674 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:01.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:20:01.675 00:20:01.675 --- 10.0.0.2 ping statistics --- 00:20:01.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.675 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:01.675 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.675 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:01.675 00:20:01.675 --- 10.0.0.3 ping statistics --- 00:20:01.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.675 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:01.675 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:20:01.933 00:20:01.933 --- 10.0.0.1 ping statistics --- 00:20:01.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.933 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86745 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86745 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86745 ']' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.933 18:45:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 [2024-07-15 18:45:36.251566] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:01.933 [2024-07-15 18:45:36.251961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.933 [2024-07-15 18:45:36.398961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.192 [2024-07-15 18:45:36.582890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.192 [2024-07-15 18:45:36.583226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:02.192 [2024-07-15 18:45:36.583377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.192 [2024-07-15 18:45:36.583452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.192 [2024-07-15 18:45:36.583493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.192 [2024-07-15 18:45:36.583689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.192 [2024-07-15 18:45:36.583832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.192 [2024-07-15 18:45:36.584127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.192 [2024-07-15 18:45:36.584127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 [2024-07-15 18:45:37.314372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 Malloc0 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 [2024-07-15 18:45:37.400576] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.128 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.128 [ 00:20:03.128 { 00:20:03.128 "allow_any_host": true, 00:20:03.128 "hosts": [], 00:20:03.128 "listen_addresses": [], 00:20:03.128 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.128 "subtype": "Discovery" 00:20:03.128 }, 00:20:03.128 { 00:20:03.128 "allow_any_host": true, 00:20:03.128 "hosts": [], 00:20:03.128 "listen_addresses": [ 00:20:03.128 { 00:20:03.128 "adrfam": "IPv4", 00:20:03.128 "traddr": "10.0.0.2", 00:20:03.128 "trsvcid": "4420", 00:20:03.128 "trtype": "TCP" 00:20:03.128 } 00:20:03.128 ], 00:20:03.128 "max_cntlid": 65519, 00:20:03.128 "max_namespaces": 2, 00:20:03.128 "min_cntlid": 1, 00:20:03.128 "model_number": "SPDK bdev Controller", 00:20:03.128 "namespaces": [ 00:20:03.128 { 00:20:03.128 "bdev_name": "Malloc0", 00:20:03.128 "name": "Malloc0", 00:20:03.128 "nguid": "8986510914374857955F7322163B1A41", 00:20:03.128 "nsid": 1, 00:20:03.128 "uuid": "89865109-1437-4857-955f-7322163b1a41" 00:20:03.128 } 00:20:03.128 ], 00:20:03.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.128 "serial_number": "SPDK00000000000001", 00:20:03.128 "subtype": "NVMe" 00:20:03.128 } 00:20:03.128 ] 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86799 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:03.129 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.387 Malloc1 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.387 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.387 [ 00:20:03.387 { 00:20:03.387 "allow_any_host": true, 00:20:03.387 "hosts": [], 00:20:03.387 "listen_addresses": [], 00:20:03.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.387 "subtype": "Discovery" 00:20:03.387 Asynchronous Event Request test 00:20:03.387 Attaching to 10.0.0.2 00:20:03.387 Attached to 10.0.0.2 00:20:03.387 Registering asynchronous event callbacks... 00:20:03.387 Starting namespace attribute notice tests for all controllers... 00:20:03.387 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:03.387 aer_cb - Changed Namespace 00:20:03.387 Cleaning up... 
00:20:03.387 }, 00:20:03.387 { 00:20:03.387 "allow_any_host": true, 00:20:03.387 "hosts": [], 00:20:03.387 "listen_addresses": [ 00:20:03.387 { 00:20:03.387 "adrfam": "IPv4", 00:20:03.387 "traddr": "10.0.0.2", 00:20:03.387 "trsvcid": "4420", 00:20:03.387 "trtype": "TCP" 00:20:03.387 } 00:20:03.387 ], 00:20:03.387 "max_cntlid": 65519, 00:20:03.387 "max_namespaces": 2, 00:20:03.387 "min_cntlid": 1, 00:20:03.387 "model_number": "SPDK bdev Controller", 00:20:03.387 "namespaces": [ 00:20:03.387 { 00:20:03.387 "bdev_name": "Malloc0", 00:20:03.387 "name": "Malloc0", 00:20:03.387 "nguid": "8986510914374857955F7322163B1A41", 00:20:03.387 "nsid": 1, 00:20:03.387 "uuid": "89865109-1437-4857-955f-7322163b1a41" 00:20:03.387 }, 00:20:03.387 { 00:20:03.387 "bdev_name": "Malloc1", 00:20:03.387 "name": "Malloc1", 00:20:03.387 "nguid": "CA93D8FB359D423ABC552F7E2712B975", 00:20:03.387 "nsid": 2, 00:20:03.387 "uuid": "ca93d8fb-359d-423a-bc55-2f7e2712b975" 00:20:03.387 } 00:20:03.387 ], 00:20:03.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.387 "serial_number": "SPDK00000000000001", 00:20:03.388 "subtype": "NVMe" 00:20:03.388 } 00:20:03.388 ] 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86799 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.388 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.646 rmmod nvme_tcp 00:20:03.646 rmmod nvme_fabrics 00:20:03.646 rmmod nvme_keyring 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86745 ']' 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@490 -- # killprocess 86745 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86745 ']' 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86745 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.646 18:45:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86745 00:20:03.646 killing process with pid 86745 00:20:03.646 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:03.646 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:03.646 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86745' 00:20:03.646 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86745 00:20:03.646 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86745 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.905 18:45:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:04.163 00:20:04.163 real 0m2.795s 00:20:04.164 user 0m7.076s 00:20:04.164 sys 0m0.900s 00:20:04.164 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.164 18:45:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.164 ************************************ 00:20:04.164 END TEST nvmf_aer 00:20:04.164 ************************************ 00:20:04.164 18:45:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:04.164 18:45:38 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:04.164 18:45:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:04.164 18:45:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.164 18:45:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.164 ************************************ 00:20:04.164 START TEST nvmf_async_init 00:20:04.164 ************************************ 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:04.164 * Looking for test storage... 
00:20:04.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e21186da39e547bcb279121b43ba771a 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.164 18:45:38 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.164 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:04.423 Cannot find device "nvmf_tgt_br" 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.423 Cannot find device "nvmf_tgt_br2" 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:04.423 Cannot find device "nvmf_tgt_br" 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:20:04.423 Cannot find device "nvmf_tgt_br2" 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.423 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:04.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:20:04.704 00:20:04.704 --- 10.0.0.2 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:04.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:04.704 00:20:04.704 --- 10.0.0.3 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:04.704 00:20:04.704 --- 10.0.0.1 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.704 18:45:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
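Note: the virtual topology that nvmf_veth_init has just rebuilt here (the same layout used by the nvmf_aer run above) can be approximated by hand with the commands below. Interface names and addresses are taken from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and is omitted only for brevity.
# initiator side stays in the default netns, target side lives in nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the two host-side veth ends so 10.0.0.1 can reach 10.0.0.2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open TCP/4420 towards the initiator interface and allow bridged forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check, as in the trace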
00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86974 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86974 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86974 ']' 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.704 18:45:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:04.704 [2024-07-15 18:45:39.099888] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:04.704 [2024-07-15 18:45:39.100017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.980 [2024-07-15 18:45:39.249734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.980 [2024-07-15 18:45:39.370548] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.980 [2024-07-15 18:45:39.370618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.980 [2024-07-15 18:45:39.370643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.980 [2024-07-15 18:45:39.370664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.980 [2024-07-15 18:45:39.370678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
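What nvmfappstart/waitforlisten boil down to at this point is starting nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A rough sketch follows; the rpc.py client path and the default /var/tmp/spdk.sock socket are assumptions, the launch command itself is copied from the trace.
# start the target in the namespace with a single core (-m 0x1) and all trace groups enabled (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until the app is up and serving its UNIX-domain RPC socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done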
00:20:04.980 [2024-07-15 18:45:39.370734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 [2024-07-15 18:45:40.191687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 null0 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e21186da39e547bcb279121b43ba771a 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.915 [2024-07-15 18:45:40.231804] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.915 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 nvme0n1 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 [ 00:20:06.173 { 00:20:06.173 "aliases": [ 00:20:06.173 "e21186da-39e5-47bc-b279-121b43ba771a" 00:20:06.173 ], 00:20:06.173 "assigned_rate_limits": { 00:20:06.173 "r_mbytes_per_sec": 0, 00:20:06.173 "rw_ios_per_sec": 0, 00:20:06.173 "rw_mbytes_per_sec": 0, 00:20:06.173 "w_mbytes_per_sec": 0 00:20:06.173 }, 00:20:06.173 "block_size": 512, 00:20:06.173 "claimed": false, 00:20:06.173 "driver_specific": { 00:20:06.173 "mp_policy": "active_passive", 00:20:06.173 "nvme": [ 00:20:06.173 { 00:20:06.173 "ctrlr_data": { 00:20:06.173 "ana_reporting": false, 00:20:06.173 "cntlid": 1, 00:20:06.173 "firmware_revision": "24.09", 00:20:06.173 "model_number": "SPDK bdev Controller", 00:20:06.173 "multi_ctrlr": true, 00:20:06.173 "oacs": { 00:20:06.173 "firmware": 0, 00:20:06.173 "format": 0, 00:20:06.173 "ns_manage": 0, 00:20:06.173 "security": 0 00:20:06.173 }, 00:20:06.173 "serial_number": "00000000000000000000", 00:20:06.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.173 "vendor_id": "0x8086" 00:20:06.173 }, 00:20:06.173 "ns_data": { 00:20:06.173 "can_share": true, 00:20:06.173 "id": 1 00:20:06.173 }, 00:20:06.173 "trid": { 00:20:06.173 "adrfam": "IPv4", 00:20:06.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.173 "traddr": "10.0.0.2", 00:20:06.173 "trsvcid": "4420", 00:20:06.173 "trtype": "TCP" 00:20:06.173 }, 00:20:06.173 "vs": { 00:20:06.173 "nvme_version": "1.3" 00:20:06.173 } 00:20:06.173 } 00:20:06.173 ] 00:20:06.173 }, 00:20:06.173 "memory_domains": [ 00:20:06.173 { 00:20:06.173 "dma_device_id": "system", 00:20:06.173 "dma_device_type": 1 00:20:06.173 } 00:20:06.173 ], 00:20:06.173 "name": "nvme0n1", 00:20:06.173 "num_blocks": 2097152, 00:20:06.173 "product_name": "NVMe disk", 00:20:06.173 "supported_io_types": { 00:20:06.173 "abort": true, 00:20:06.173 "compare": true, 00:20:06.173 "compare_and_write": true, 00:20:06.173 "copy": true, 00:20:06.173 "flush": true, 00:20:06.173 "get_zone_info": false, 00:20:06.173 "nvme_admin": true, 00:20:06.173 "nvme_io": true, 00:20:06.173 "nvme_io_md": false, 00:20:06.173 "nvme_iov_md": false, 00:20:06.173 "read": true, 00:20:06.173 "reset": true, 00:20:06.173 "seek_data": false, 00:20:06.173 "seek_hole": false, 00:20:06.173 "unmap": false, 00:20:06.173 "write": true, 00:20:06.173 "write_zeroes": true, 00:20:06.173 "zcopy": false, 00:20:06.173 "zone_append": false, 00:20:06.173 "zone_management": false 00:20:06.173 }, 00:20:06.173 "uuid": "e21186da-39e5-47bc-b279-121b43ba771a", 00:20:06.173 "zoned": false 00:20:06.173 } 00:20:06.173 ] 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
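The sequence async_init.sh is driving here is a plain attach, inspect, reset, inspect loop. Condensed below; invoking the RPCs through rpc.py is an assumption, while the RPC names, arguments and the cntlid values come from the surrounding trace and bdev_get_bdevs dumps.
# attach a host-side controller to the subsystem exported on 10.0.0.2:4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_get_bdevs -b nvme0n1            # first connection: controller reports cntlid 1
rpc.py bdev_nvme_reset_controller nvme0     # drop and re-establish the TCP qpairs
rpc.py bdev_get_bdevs -b nvme0n1            # after the reset the same bdev is served with cntlid 2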
00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.173 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 [2024-07-15 18:45:40.512429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.173 [2024-07-15 18:45:40.512765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f3a30 (9): Bad file descriptor 00:20:06.173 [2024-07-15 18:45:40.655148] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 [ 00:20:06.432 { 00:20:06.432 "aliases": [ 00:20:06.432 "e21186da-39e5-47bc-b279-121b43ba771a" 00:20:06.432 ], 00:20:06.432 "assigned_rate_limits": { 00:20:06.432 "r_mbytes_per_sec": 0, 00:20:06.432 "rw_ios_per_sec": 0, 00:20:06.432 "rw_mbytes_per_sec": 0, 00:20:06.432 "w_mbytes_per_sec": 0 00:20:06.432 }, 00:20:06.432 "block_size": 512, 00:20:06.432 "claimed": false, 00:20:06.432 "driver_specific": { 00:20:06.432 "mp_policy": "active_passive", 00:20:06.432 "nvme": [ 00:20:06.432 { 00:20:06.432 "ctrlr_data": { 00:20:06.432 "ana_reporting": false, 00:20:06.432 "cntlid": 2, 00:20:06.432 "firmware_revision": "24.09", 00:20:06.432 "model_number": "SPDK bdev Controller", 00:20:06.432 "multi_ctrlr": true, 00:20:06.432 "oacs": { 00:20:06.432 "firmware": 0, 00:20:06.432 "format": 0, 00:20:06.432 "ns_manage": 0, 00:20:06.432 "security": 0 00:20:06.432 }, 00:20:06.432 "serial_number": "00000000000000000000", 00:20:06.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.432 "vendor_id": "0x8086" 00:20:06.432 }, 00:20:06.432 "ns_data": { 00:20:06.432 "can_share": true, 00:20:06.432 "id": 1 00:20:06.432 }, 00:20:06.432 "trid": { 00:20:06.432 "adrfam": "IPv4", 00:20:06.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.432 "traddr": "10.0.0.2", 00:20:06.432 "trsvcid": "4420", 00:20:06.432 "trtype": "TCP" 00:20:06.432 }, 00:20:06.432 "vs": { 00:20:06.432 "nvme_version": "1.3" 00:20:06.432 } 00:20:06.432 } 00:20:06.432 ] 00:20:06.432 }, 00:20:06.432 "memory_domains": [ 00:20:06.432 { 00:20:06.432 "dma_device_id": "system", 00:20:06.432 "dma_device_type": 1 00:20:06.432 } 00:20:06.432 ], 00:20:06.432 "name": "nvme0n1", 00:20:06.432 "num_blocks": 2097152, 00:20:06.432 "product_name": "NVMe disk", 00:20:06.432 "supported_io_types": { 00:20:06.432 "abort": true, 00:20:06.432 "compare": true, 00:20:06.432 "compare_and_write": true, 00:20:06.432 "copy": true, 00:20:06.432 "flush": true, 00:20:06.432 "get_zone_info": false, 00:20:06.432 "nvme_admin": true, 00:20:06.432 "nvme_io": true, 00:20:06.432 "nvme_io_md": false, 00:20:06.432 "nvme_iov_md": false, 00:20:06.432 "read": true, 00:20:06.432 "reset": true, 00:20:06.432 "seek_data": false, 00:20:06.432 "seek_hole": false, 00:20:06.432 "unmap": false, 00:20:06.432 "write": true, 00:20:06.432 "write_zeroes": true, 00:20:06.432 "zcopy": false, 00:20:06.432 "zone_append": false, 00:20:06.432 "zone_management": false 00:20:06.432 }, 00:20:06.432 "uuid": "e21186da-39e5-47bc-b279-121b43ba771a", 00:20:06.432 "zoned": false 00:20:06.432 } 
00:20:06.432 ] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.SZYL9fURkw 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.SZYL9fURkw 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 [2024-07-15 18:45:40.732713] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.432 [2024-07-15 18:45:40.732905] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZYL9fURkw 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 [2024-07-15 18:45:40.740731] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SZYL9fURkw 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 [2024-07-15 18:45:40.748716] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.432 [2024-07-15 18:45:40.748836] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
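The TLS leg just exercised narrows down to five RPCs plus a key file. A sketch with the key, NQNs and ports copied from the trace; writing the echo output into the mktemp file is implied by the chmod that follows it, and the log itself flags both the --psk path form and nvme_ctrlr_psk as deprecated/experimental.
KEY=/tmp/tmp.SZYL9fURkw
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"
# require explicit host registration, then open a TLS-only listener on 4421
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# reconnect through the secure channel, presenting the same PSK from the host side
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"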
00:20:06.432 nvme0n1 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 [ 00:20:06.432 { 00:20:06.432 "aliases": [ 00:20:06.432 "e21186da-39e5-47bc-b279-121b43ba771a" 00:20:06.432 ], 00:20:06.432 "assigned_rate_limits": { 00:20:06.432 "r_mbytes_per_sec": 0, 00:20:06.432 "rw_ios_per_sec": 0, 00:20:06.432 "rw_mbytes_per_sec": 0, 00:20:06.432 "w_mbytes_per_sec": 0 00:20:06.432 }, 00:20:06.432 "block_size": 512, 00:20:06.432 "claimed": false, 00:20:06.432 "driver_specific": { 00:20:06.432 "mp_policy": "active_passive", 00:20:06.432 "nvme": [ 00:20:06.432 { 00:20:06.432 "ctrlr_data": { 00:20:06.432 "ana_reporting": false, 00:20:06.432 "cntlid": 3, 00:20:06.432 "firmware_revision": "24.09", 00:20:06.432 "model_number": "SPDK bdev Controller", 00:20:06.432 "multi_ctrlr": true, 00:20:06.432 "oacs": { 00:20:06.432 "firmware": 0, 00:20:06.432 "format": 0, 00:20:06.432 "ns_manage": 0, 00:20:06.432 "security": 0 00:20:06.432 }, 00:20:06.432 "serial_number": "00000000000000000000", 00:20:06.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.432 "vendor_id": "0x8086" 00:20:06.432 }, 00:20:06.432 "ns_data": { 00:20:06.432 "can_share": true, 00:20:06.432 "id": 1 00:20:06.432 }, 00:20:06.432 "trid": { 00:20:06.432 "adrfam": "IPv4", 00:20:06.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.432 "traddr": "10.0.0.2", 00:20:06.432 "trsvcid": "4421", 00:20:06.432 "trtype": "TCP" 00:20:06.432 }, 00:20:06.432 "vs": { 00:20:06.432 "nvme_version": "1.3" 00:20:06.432 } 00:20:06.432 } 00:20:06.432 ] 00:20:06.432 }, 00:20:06.432 "memory_domains": [ 00:20:06.432 { 00:20:06.432 "dma_device_id": "system", 00:20:06.432 "dma_device_type": 1 00:20:06.432 } 00:20:06.432 ], 00:20:06.432 "name": "nvme0n1", 00:20:06.432 "num_blocks": 2097152, 00:20:06.432 "product_name": "NVMe disk", 00:20:06.432 "supported_io_types": { 00:20:06.432 "abort": true, 00:20:06.432 "compare": true, 00:20:06.432 "compare_and_write": true, 00:20:06.432 "copy": true, 00:20:06.432 "flush": true, 00:20:06.432 "get_zone_info": false, 00:20:06.432 "nvme_admin": true, 00:20:06.432 "nvme_io": true, 00:20:06.432 "nvme_io_md": false, 00:20:06.432 "nvme_iov_md": false, 00:20:06.432 "read": true, 00:20:06.432 "reset": true, 00:20:06.432 "seek_data": false, 00:20:06.432 "seek_hole": false, 00:20:06.432 "unmap": false, 00:20:06.432 "write": true, 00:20:06.432 "write_zeroes": true, 00:20:06.432 "zcopy": false, 00:20:06.432 "zone_append": false, 00:20:06.432 "zone_management": false 00:20:06.432 }, 00:20:06.432 "uuid": "e21186da-39e5-47bc-b279-121b43ba771a", 00:20:06.432 "zoned": false 00:20:06.432 } 00:20:06.432 ] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.SZYL9fURkw 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.432 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.690 rmmod nvme_tcp 00:20:06.690 rmmod nvme_fabrics 00:20:06.690 rmmod nvme_keyring 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86974 ']' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86974 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86974 ']' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86974 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86974 00:20:06.690 killing process with pid 86974 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86974' 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86974 00:20:06.690 [2024-07-15 18:45:40.999531] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.690 [2024-07-15 18:45:40.999579] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:06.690 18:45:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86974 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
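nvmftestfini then unwinds everything in reverse, approximately as below. The module removals, kill and address flush appear in the trace; the namespace and bridge removal happen inside _remove_spdk_ns and nvmf_tcp_fini, whose bodies are not traced here, so those last two commands are an assumption based on the earlier cleanup run.
modprobe -r nvme-tcp                 # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -r nvme-fabrics
kill "$nvmfpid"                      # killprocess 86974
ip -4 addr flush nvmf_init_if        # traced immediately after this as nvmf/common.sh@279
ip netns delete nvmf_tgt_ns_spdk     # assumed: done by _remove_spdk_ns
ip link delete nvmf_br type bridge   # assumed: bridge cleanup, as seen in the earlier nvmf_tcp_fini pass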
00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.947 00:20:06.947 real 0m2.747s 00:20:06.947 user 0m2.594s 00:20:06.947 sys 0m0.695s 00:20:06.947 ************************************ 00:20:06.947 END TEST nvmf_async_init 00:20:06.947 ************************************ 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.947 18:45:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.947 18:45:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:06.947 18:45:41 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:06.947 18:45:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.947 18:45:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.947 18:45:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.947 ************************************ 00:20:06.947 START TEST dma 00:20:06.947 ************************************ 00:20:06.947 18:45:41 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:06.947 * Looking for test storage... 00:20:06.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.947 18:45:41 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.947 18:45:41 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.947 18:45:41 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.947 18:45:41 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.947 18:45:41 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.947 18:45:41 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.947 18:45:41 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.947 18:45:41 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:06.947 18:45:41 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.947 18:45:41 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.947 18:45:41 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:06.947 18:45:41 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:06.947 00:20:06.947 real 0m0.111s 00:20:06.947 user 0m0.058s 00:20:06.947 sys 0m0.063s 00:20:06.947 18:45:41 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.947 ************************************ 00:20:06.947 END TEST dma 00:20:06.947 ************************************ 
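The dma suite is effectively a no-op on this run: host/dma.sh only runs against RDMA transports, so with --transport=tcp it exits immediately, which is why the test completes in about a tenth of a second. A minimal sketch of the guard visible in the xtrace above (the variable name is an assumption; the log shows the literal comparison of 'tcp' against 'rdma' followed by exit 0):
# host/dma.sh, simplified: skip the whole suite for non-RDMA transports
if [ "$TEST_TRANSPORT" != "rdma" ]; then
    exit 0
fi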
00:20:06.947 18:45:41 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:07.205 18:45:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:07.205 18:45:41 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:07.205 18:45:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:07.205 18:45:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.205 18:45:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.205 ************************************ 00:20:07.205 START TEST nvmf_identify 00:20:07.205 ************************************ 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:07.205 * Looking for test storage... 00:20:07.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.205 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:07.206 Cannot find device "nvmf_tgt_br" 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.206 Cannot find device "nvmf_tgt_br2" 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:07.206 Cannot find device "nvmf_tgt_br" 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:20:07.206 Cannot find device "nvmf_tgt_br2" 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:20:07.206 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.462 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.719 18:45:41 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:07.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:07.719 00:20:07.719 --- 10.0.0.2 ping statistics --- 00:20:07.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.719 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:07.719 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.719 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:07.719 00:20:07.719 --- 10.0.0.3 ping statistics --- 00:20:07.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.719 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:20:07.719 00:20:07.719 --- 10.0.0.1 ping statistics --- 00:20:07.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.719 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.719 18:45:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87239 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87239 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 87239 ']' 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
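The nvmf_veth_init sequence traced above builds the virtual topology the host tests talk over: nvmf_init_if (10.0.0.1/24) stays in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the peer ends of the veth pairs are enslaved to the nvmf_br bridge; the three pings confirm the path before nvmf_tgt is started inside the namespace. Condensed to its essentials, and leaving out the second target interface, the setup is (a sketch of commands already shown above, not a substitute for nvmf_veth_init):
# target interfaces live in a dedicated network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the root-namespace peers so 10.0.0.1 can reach 10.0.0.2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # same sanity check as in the log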
00:20:07.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.719 18:45:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:07.719 [2024-07-15 18:45:42.081053] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:07.720 [2024-07-15 18:45:42.081173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.017 [2024-07-15 18:45:42.227579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.017 [2024-07-15 18:45:42.349527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.018 [2024-07-15 18:45:42.349792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.018 [2024-07-15 18:45:42.349885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.018 [2024-07-15 18:45:42.349903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.018 [2024-07-15 18:45:42.349914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.018 [2024-07-15 18:45:42.350150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.018 [2024-07-15 18:45:42.350332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.018 [2024-07-15 18:45:42.351037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.018 [2024-07-15 18:45:42.351044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.582 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.582 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:08.582 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.582 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.582 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.839 [2024-07-15 18:45:43.072250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:08.839 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 Malloc0 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 [2024-07-15 18:45:43.196556] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.840 [ 00:20:08.840 { 00:20:08.840 "allow_any_host": true, 00:20:08.840 "hosts": [], 00:20:08.840 "listen_addresses": [ 00:20:08.840 { 00:20:08.840 "adrfam": "IPv4", 00:20:08.840 "traddr": "10.0.0.2", 00:20:08.840 "trsvcid": "4420", 00:20:08.840 "trtype": "TCP" 00:20:08.840 } 00:20:08.840 ], 00:20:08.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:08.840 "subtype": "Discovery" 00:20:08.840 }, 00:20:08.840 { 00:20:08.840 "allow_any_host": true, 00:20:08.840 "hosts": [], 00:20:08.840 "listen_addresses": [ 00:20:08.840 { 00:20:08.840 "adrfam": "IPv4", 00:20:08.840 "traddr": "10.0.0.2", 00:20:08.840 "trsvcid": "4420", 00:20:08.840 "trtype": "TCP" 00:20:08.840 } 00:20:08.840 ], 00:20:08.840 "max_cntlid": 65519, 00:20:08.840 "max_namespaces": 32, 00:20:08.840 "min_cntlid": 1, 00:20:08.840 "model_number": "SPDK bdev Controller", 00:20:08.840 "namespaces": [ 00:20:08.840 { 00:20:08.840 "bdev_name": "Malloc0", 00:20:08.840 "eui64": "ABCDEF0123456789", 00:20:08.840 "name": "Malloc0", 00:20:08.840 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:08.840 "nsid": 1, 00:20:08.840 "uuid": "e5203ba7-d491-4b0c-af7f-4877aa960a51" 00:20:08.840 } 00:20:08.840 ], 00:20:08.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.840 "serial_number": "SPDK00000000000001", 00:20:08.840 "subtype": "NVMe" 00:20:08.840 } 00:20:08.840 ] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.840 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify 
-r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:08.840 [2024-07-15 18:45:43.261454] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:08.840 [2024-07-15 18:45:43.261535] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87298 ] 00:20:09.101 [2024-07-15 18:45:43.413419] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:09.101 [2024-07-15 18:45:43.413517] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:09.101 [2024-07-15 18:45:43.413524] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:09.101 [2024-07-15 18:45:43.413540] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:09.101 [2024-07-15 18:45:43.413548] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:09.101 [2024-07-15 18:45:43.413691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:09.101 [2024-07-15 18:45:43.413739] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x205da60 0 00:20:09.101 [2024-07-15 18:45:43.418006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:09.101 [2024-07-15 18:45:43.418040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:09.101 [2024-07-15 18:45:43.418048] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:09.101 [2024-07-15 18:45:43.418054] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:09.101 [2024-07-15 18:45:43.418119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.418128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.418134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.418152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:09.101 [2024-07-15 18:45:43.418192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.425973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426039] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:09.101 [2024-07-15 18:45:43.426051] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:09.101 [2024-07-15 18:45:43.426060] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 
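Before launching the identify example whose EAL and controller-state trace begins above, host/identify.sh configured the target entirely through rpc_cmd, the test suite's wrapper around scripts/rpc.py. The same configuration can be reproduced by hand with rpc.py against the target's default /var/tmp/spdk.sock socket; the RPC names and arguments below are copied from the xtrace, and only the direct rpc.py invocation is an assumption:
# create the TCP transport, a 64 MiB / 512 B-block malloc bdev, and a
# subsystem exposing it, then add data and discovery listeners on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems    # prints the JSON dumped above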
00:20:09.101 [2024-07-15 18:45:43.426086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.426278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:09.101 [2024-07-15 18:45:43.426311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:09.101 [2024-07-15 18:45:43.426320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.426412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426434] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:09.101 [2024-07-15 18:45:43.426444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.426536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.426654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426676] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:09.101 [2024-07-15 18:45:43.426682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426797] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:09.101 [2024-07-15 18:45:43.426804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.426898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.426906] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.426910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.426920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:09.101 [2024-07-15 18:45:43.426931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.426940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.426959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.426977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.427030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.427037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.427041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.427051] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:09.101 [2024-07-15 18:45:43.427057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:09.101 [2024-07-15 18:45:43.427066] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:09.101 [2024-07-15 18:45:43.427079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:09.101 [2024-07-15 18:45:43.427091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.101 [2024-07-15 18:45:43.427104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 18:45:43.427121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.101 [2024-07-15 18:45:43.427210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.101 [2024-07-15 18:45:43.427222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.101 [2024-07-15 18:45:43.427230] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427238] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205da60): datao=0, datal=4096, cccid=0 00:20:09.101 [2024-07-15 18:45:43.427248] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x20a0840) on tqpair(0x205da60): expected_datao=0, payload_size=4096 00:20:09.101 [2024-07-15 18:45:43.427258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427271] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427280] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.101 [2024-07-15 18:45:43.427298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.101 [2024-07-15 18:45:43.427303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.101 [2024-07-15 18:45:43.427307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.101 [2024-07-15 18:45:43.427319] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:09.101 [2024-07-15 18:45:43.427326] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:09.101 [2024-07-15 18:45:43.427331] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:09.101 [2024-07-15 18:45:43.427338] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:09.101 [2024-07-15 18:45:43.427344] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:09.101 [2024-07-15 18:45:43.427350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:09.101 [2024-07-15 18:45:43.427360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:09.101 [2024-07-15 18:45:43.427369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.102 [2024-07-15 18:45:43.427407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.102 [2024-07-15 18:45:43.427462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.427468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.427473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.427486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427502] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.102 [2024-07-15 18:45:43.427510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.102 [2024-07-15 18:45:43.427533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.102 [2024-07-15 18:45:43.427556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.102 [2024-07-15 18:45:43.427578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:09.102 [2024-07-15 18:45:43.427592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:09.102 [2024-07-15 18:45:43.427600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.102 [2024-07-15 18:45:43.427630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0840, cid 0, qid 0 00:20:09.102 [2024-07-15 18:45:43.427637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a09c0, cid 1, qid 0 00:20:09.102 [2024-07-15 18:45:43.427642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0b40, cid 2, qid 0 00:20:09.102 [2024-07-15 18:45:43.427648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.102 [2024-07-15 18:45:43.427654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0e40, cid 4, qid 0 00:20:09.102 [2024-07-15 18:45:43.427739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.427746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.427751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 
18:45:43.427756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0e40) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.427762] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:09.102 [2024-07-15 18:45:43.427772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:09.102 [2024-07-15 18:45:43.427784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.427796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.102 [2024-07-15 18:45:43.427813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0e40, cid 4, qid 0 00:20:09.102 [2024-07-15 18:45:43.427872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.102 [2024-07-15 18:45:43.427879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.102 [2024-07-15 18:45:43.427883] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427888] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205da60): datao=0, datal=4096, cccid=4 00:20:09.102 [2024-07-15 18:45:43.427893] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a0e40) on tqpair(0x205da60): expected_datao=0, payload_size=4096 00:20:09.102 [2024-07-15 18:45:43.427899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.427927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.427931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.427936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0e40) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.427963] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:09.102 [2024-07-15 18:45:43.427997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.428010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.102 [2024-07-15 18:45:43.428019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.428035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.102 [2024-07-15 18:45:43.428058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0e40, cid 4, qid 0 00:20:09.102 [2024-07-15 18:45:43.428064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0fc0, cid 5, qid 0 00:20:09.102 [2024-07-15 18:45:43.428205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.102 [2024-07-15 18:45:43.428221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.102 [2024-07-15 18:45:43.428228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428232] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205da60): datao=0, datal=1024, cccid=4 00:20:09.102 [2024-07-15 18:45:43.428238] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a0e40) on tqpair(0x205da60): expected_datao=0, payload_size=1024 00:20:09.102 [2024-07-15 18:45:43.428244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428251] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428256] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.428269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.428273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.428278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0fc0) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.469088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.469131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.469138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0e40) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.469183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.469205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.102 [2024-07-15 18:45:43.469251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0e40, cid 4, qid 0 00:20:09.102 [2024-07-15 18:45:43.469377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.102 [2024-07-15 18:45:43.469384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.102 [2024-07-15 18:45:43.469389] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469394] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205da60): datao=0, datal=3072, cccid=4 00:20:09.102 [2024-07-15 18:45:43.469400] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a0e40) on tqpair(0x205da60): expected_datao=0, payload_size=3072 00:20:09.102 [2024-07-15 18:45:43.469413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
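The repeated GET LOG PAGE (02) admin commands in the trace above, with cdw10 values whose low byte is 0x70, are the identify example fetching the discovery log page (log identifier 70h) from the discovery subsystem in several chunks before printing the controller summary that follows. A kernel-initiator equivalent on the same topology would be the single nvme-cli call below (an illustration only; nvme-cli is not part of this run):
nvme discover -t tcp -a 10.0.0.2 -s 4420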
00:20:09.102 [2024-07-15 18:45:43.469422] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469427] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.469443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.469448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0e40) on tqpair=0x205da60 00:20:09.102 [2024-07-15 18:45:43.469463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205da60) 00:20:09.102 [2024-07-15 18:45:43.469475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.102 [2024-07-15 18:45:43.469499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0e40, cid 4, qid 0 00:20:09.102 [2024-07-15 18:45:43.469565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.102 [2024-07-15 18:45:43.469572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.102 [2024-07-15 18:45:43.469576] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469580] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205da60): datao=0, datal=8, cccid=4 00:20:09.102 [2024-07-15 18:45:43.469586] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a0e40) on tqpair(0x205da60): expected_datao=0, payload_size=8 00:20:09.102 [2024-07-15 18:45:43.469592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.469603] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.513992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.102 [2024-07-15 18:45:43.514028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.102 [2024-07-15 18:45:43.514034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.102 [2024-07-15 18:45:43.514040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0e40) on tqpair=0x205da60 00:20:09.102 ===================================================== 00:20:09.102 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:09.102 ===================================================== 00:20:09.102 Controller Capabilities/Features 00:20:09.102 ================================ 00:20:09.102 Vendor ID: 0000 00:20:09.102 Subsystem Vendor ID: 0000 00:20:09.102 Serial Number: .................... 00:20:09.102 Model Number: ........................................ 
00:20:09.102 Firmware Version: 24.09 00:20:09.102 Recommended Arb Burst: 0 00:20:09.102 IEEE OUI Identifier: 00 00 00 00:20:09.102 Multi-path I/O 00:20:09.102 May have multiple subsystem ports: No 00:20:09.102 May have multiple controllers: No 00:20:09.102 Associated with SR-IOV VF: No 00:20:09.102 Max Data Transfer Size: 131072 00:20:09.102 Max Number of Namespaces: 0 00:20:09.102 Max Number of I/O Queues: 1024 00:20:09.102 NVMe Specification Version (VS): 1.3 00:20:09.102 NVMe Specification Version (Identify): 1.3 00:20:09.102 Maximum Queue Entries: 128 00:20:09.102 Contiguous Queues Required: Yes 00:20:09.102 Arbitration Mechanisms Supported 00:20:09.102 Weighted Round Robin: Not Supported 00:20:09.102 Vendor Specific: Not Supported 00:20:09.102 Reset Timeout: 15000 ms 00:20:09.102 Doorbell Stride: 4 bytes 00:20:09.102 NVM Subsystem Reset: Not Supported 00:20:09.102 Command Sets Supported 00:20:09.102 NVM Command Set: Supported 00:20:09.102 Boot Partition: Not Supported 00:20:09.102 Memory Page Size Minimum: 4096 bytes 00:20:09.102 Memory Page Size Maximum: 4096 bytes 00:20:09.102 Persistent Memory Region: Not Supported 00:20:09.102 Optional Asynchronous Events Supported 00:20:09.102 Namespace Attribute Notices: Not Supported 00:20:09.102 Firmware Activation Notices: Not Supported 00:20:09.102 ANA Change Notices: Not Supported 00:20:09.102 PLE Aggregate Log Change Notices: Not Supported 00:20:09.102 LBA Status Info Alert Notices: Not Supported 00:20:09.102 EGE Aggregate Log Change Notices: Not Supported 00:20:09.102 Normal NVM Subsystem Shutdown event: Not Supported 00:20:09.102 Zone Descriptor Change Notices: Not Supported 00:20:09.102 Discovery Log Change Notices: Supported 00:20:09.102 Controller Attributes 00:20:09.102 128-bit Host Identifier: Not Supported 00:20:09.102 Non-Operational Permissive Mode: Not Supported 00:20:09.102 NVM Sets: Not Supported 00:20:09.102 Read Recovery Levels: Not Supported 00:20:09.102 Endurance Groups: Not Supported 00:20:09.102 Predictable Latency Mode: Not Supported 00:20:09.102 Traffic Based Keep ALive: Not Supported 00:20:09.102 Namespace Granularity: Not Supported 00:20:09.102 SQ Associations: Not Supported 00:20:09.102 UUID List: Not Supported 00:20:09.103 Multi-Domain Subsystem: Not Supported 00:20:09.103 Fixed Capacity Management: Not Supported 00:20:09.103 Variable Capacity Management: Not Supported 00:20:09.103 Delete Endurance Group: Not Supported 00:20:09.103 Delete NVM Set: Not Supported 00:20:09.103 Extended LBA Formats Supported: Not Supported 00:20:09.103 Flexible Data Placement Supported: Not Supported 00:20:09.103 00:20:09.103 Controller Memory Buffer Support 00:20:09.103 ================================ 00:20:09.103 Supported: No 00:20:09.103 00:20:09.103 Persistent Memory Region Support 00:20:09.103 ================================ 00:20:09.103 Supported: No 00:20:09.103 00:20:09.103 Admin Command Set Attributes 00:20:09.103 ============================ 00:20:09.103 Security Send/Receive: Not Supported 00:20:09.103 Format NVM: Not Supported 00:20:09.103 Firmware Activate/Download: Not Supported 00:20:09.103 Namespace Management: Not Supported 00:20:09.103 Device Self-Test: Not Supported 00:20:09.103 Directives: Not Supported 00:20:09.103 NVMe-MI: Not Supported 00:20:09.103 Virtualization Management: Not Supported 00:20:09.103 Doorbell Buffer Config: Not Supported 00:20:09.103 Get LBA Status Capability: Not Supported 00:20:09.103 Command & Feature Lockdown Capability: Not Supported 00:20:09.103 Abort Command Limit: 1 00:20:09.103 Async 
Event Request Limit: 4 00:20:09.103 Number of Firmware Slots: N/A 00:20:09.103 Firmware Slot 1 Read-Only: N/A 00:20:09.103 Firmware Activation Without Reset: N/A 00:20:09.103 Multiple Update Detection Support: N/A 00:20:09.103 Firmware Update Granularity: No Information Provided 00:20:09.103 Per-Namespace SMART Log: No 00:20:09.103 Asymmetric Namespace Access Log Page: Not Supported 00:20:09.103 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:09.103 Command Effects Log Page: Not Supported 00:20:09.103 Get Log Page Extended Data: Supported 00:20:09.103 Telemetry Log Pages: Not Supported 00:20:09.103 Persistent Event Log Pages: Not Supported 00:20:09.103 Supported Log Pages Log Page: May Support 00:20:09.103 Commands Supported & Effects Log Page: Not Supported 00:20:09.103 Feature Identifiers & Effects Log Page:May Support 00:20:09.103 NVMe-MI Commands & Effects Log Page: May Support 00:20:09.103 Data Area 4 for Telemetry Log: Not Supported 00:20:09.103 Error Log Page Entries Supported: 128 00:20:09.103 Keep Alive: Not Supported 00:20:09.103 00:20:09.103 NVM Command Set Attributes 00:20:09.103 ========================== 00:20:09.103 Submission Queue Entry Size 00:20:09.103 Max: 1 00:20:09.103 Min: 1 00:20:09.103 Completion Queue Entry Size 00:20:09.103 Max: 1 00:20:09.103 Min: 1 00:20:09.103 Number of Namespaces: 0 00:20:09.103 Compare Command: Not Supported 00:20:09.103 Write Uncorrectable Command: Not Supported 00:20:09.103 Dataset Management Command: Not Supported 00:20:09.103 Write Zeroes Command: Not Supported 00:20:09.103 Set Features Save Field: Not Supported 00:20:09.103 Reservations: Not Supported 00:20:09.103 Timestamp: Not Supported 00:20:09.103 Copy: Not Supported 00:20:09.103 Volatile Write Cache: Not Present 00:20:09.103 Atomic Write Unit (Normal): 1 00:20:09.103 Atomic Write Unit (PFail): 1 00:20:09.103 Atomic Compare & Write Unit: 1 00:20:09.103 Fused Compare & Write: Supported 00:20:09.103 Scatter-Gather List 00:20:09.103 SGL Command Set: Supported 00:20:09.103 SGL Keyed: Supported 00:20:09.103 SGL Bit Bucket Descriptor: Not Supported 00:20:09.103 SGL Metadata Pointer: Not Supported 00:20:09.103 Oversized SGL: Not Supported 00:20:09.103 SGL Metadata Address: Not Supported 00:20:09.103 SGL Offset: Supported 00:20:09.103 Transport SGL Data Block: Not Supported 00:20:09.103 Replay Protected Memory Block: Not Supported 00:20:09.103 00:20:09.103 Firmware Slot Information 00:20:09.103 ========================= 00:20:09.103 Active slot: 0 00:20:09.103 00:20:09.103 00:20:09.103 Error Log 00:20:09.103 ========= 00:20:09.103 00:20:09.103 Active Namespaces 00:20:09.103 ================= 00:20:09.103 Discovery Log Page 00:20:09.103 ================== 00:20:09.103 Generation Counter: 2 00:20:09.103 Number of Records: 2 00:20:09.103 Record Format: 0 00:20:09.103 00:20:09.103 Discovery Log Entry 0 00:20:09.103 ---------------------- 00:20:09.103 Transport Type: 3 (TCP) 00:20:09.103 Address Family: 1 (IPv4) 00:20:09.103 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:09.103 Entry Flags: 00:20:09.103 Duplicate Returned Information: 1 00:20:09.103 Explicit Persistent Connection Support for Discovery: 1 00:20:09.103 Transport Requirements: 00:20:09.103 Secure Channel: Not Required 00:20:09.103 Port ID: 0 (0x0000) 00:20:09.103 Controller ID: 65535 (0xffff) 00:20:09.103 Admin Max SQ Size: 128 00:20:09.103 Transport Service Identifier: 4420 00:20:09.103 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:09.103 Transport Address: 10.0.0.2 00:20:09.103 
Discovery Log Entry 1 00:20:09.103 ---------------------- 00:20:09.103 Transport Type: 3 (TCP) 00:20:09.103 Address Family: 1 (IPv4) 00:20:09.103 Subsystem Type: 2 (NVM Subsystem) 00:20:09.103 Entry Flags: 00:20:09.103 Duplicate Returned Information: 0 00:20:09.103 Explicit Persistent Connection Support for Discovery: 0 00:20:09.103 Transport Requirements: 00:20:09.103 Secure Channel: Not Required 00:20:09.103 Port ID: 0 (0x0000) 00:20:09.103 Controller ID: 65535 (0xffff) 00:20:09.103 Admin Max SQ Size: 128 00:20:09.103 Transport Service Identifier: 4420 00:20:09.103 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:09.103 Transport Address: 10.0.0.2 [2024-07-15 18:45:43.514200] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:09.103 [2024-07-15 18:45:43.514217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0840) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.103 [2024-07-15 18:45:43.514233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a09c0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.103 [2024-07-15 18:45:43.514245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0b40) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.103 [2024-07-15 18:45:43.514257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.103 [2024-07-15 18:45:43.514276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.514298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 
18:45:43.514443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514556] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:09.103 [2024-07-15 18:45:43.514562] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:09.103 [2024-07-15 18:45:43.514572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.514589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.514695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514792] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.514799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.514891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.514907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.514923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.514982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.514989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.514994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.514998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.515008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.515013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.515018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.103 [2024-07-15 18:45:43.515025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.103 [2024-07-15 18:45:43.515041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.103 [2024-07-15 18:45:43.515089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.103 [2024-07-15 18:45:43.515096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.103 [2024-07-15 18:45:43.515100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.515104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.103 [2024-07-15 18:45:43.515114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.103 [2024-07-15 18:45:43.515119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 
[2024-07-15 18:45:43.515510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:09.104 [2024-07-15 18:45:43.515832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.515878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.515928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.515935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.515939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.515969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.515978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.515985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516498] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.104 [2024-07-15 18:45:43.516584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.104 [2024-07-15 18:45:43.516588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.104 [2024-07-15 18:45:43.516602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.104 [2024-07-15 18:45:43.516611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.104 [2024-07-15 18:45:43.516619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.104 [2024-07-15 18:45:43.516634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.104 [2024-07-15 18:45:43.516682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.516689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.516694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.516708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.516724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.516740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.516787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.516794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.516798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.516812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 
[2024-07-15 18:45:43.516829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.516844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.516892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.516898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.516903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.516917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.516926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.516934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.516960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517169] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 
[2024-07-15 18:45:43.517556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.517870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.517877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.517881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
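For context on the repetitive stretch above: an NVMe-oF controller has no memory-mapped register window, so CC and CSTS accesses travel as Fabrics Property Set/Get admin commands. The FABRIC PROPERTY SET captured further up wrote CC to start shutting down the discovery controller (nvme_ctrlr_shutdown_set_cc_done reported RTD3E = 0 us and a 10000 ms shutdown timeout), and the long run of FABRIC PROPERTY GET qid:0 cid:3 completions is the driver polling CSTS until CSTS.SHST reports shutdown complete, which it does a few entries below ("shutdown complete in 7 milliseconds"). At the public-API level the same status register can be read back with spdk_nvme_ctrlr_get_regs_csts(); a minimal sketch, assuming an already-attached struct spdk_nvme_ctrlr *ctrlr (the helper name is ours, not part of the test):

#include <stdbool.h>
#include "spdk/nvme.h"

/* Sketch: has the controller finished its shutdown handshake?
 * On NVMe-oF this register read is itself carried as a Fabrics Property Get. */
static bool
shutdown_finished(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}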
00:20:09.105 [2024-07-15 18:45:43.517886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.517895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.517904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.517912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.517927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.521971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.521991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.521997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.522002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.522016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.522022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.522026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205da60) 00:20:09.105 [2024-07-15 18:45:43.522036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.105 [2024-07-15 18:45:43.522069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a0cc0, cid 3, qid 0 00:20:09.105 [2024-07-15 18:45:43.522125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.105 [2024-07-15 18:45:43.522132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.105 [2024-07-15 18:45:43.522136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.105 [2024-07-15 18:45:43.522141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a0cc0) on tqpair=0x205da60 00:20:09.105 [2024-07-15 18:45:43.522149] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:09.105 00:20:09.105 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:09.105 [2024-07-15 18:45:43.563246] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
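With the discovery controller destructed, identify.sh re-runs spdk_nvme_identify, this time against the NVM subsystem nqn.2016-06.io.spdk:cnode1 (the command line is captured just above). The trace that follows walks SPDK's controller-initialization state machine for that connection: connect adminq, read VS and CAP, toggle CC.EN, wait for CSTS.RDY, then IDENTIFY and AER configuration. A rough public-API equivalent of what the tool does for this subsystem, as a sketch only (the program name and the abbreviated error handling are ours, not taken from the test):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the init state machine seen in this log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Cached copy of the 4096-byte IDENTIFY CONTROLLER data. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("subnqn: %.*s\n", (int)sizeof(cdata->subnqn),
	       (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}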
00:20:09.105 [2024-07-15 18:45:43.563296] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87300 ] 00:20:09.367 [2024-07-15 18:45:43.701272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:09.368 [2024-07-15 18:45:43.701342] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:09.368 [2024-07-15 18:45:43.701349] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:09.368 [2024-07-15 18:45:43.701365] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:09.368 [2024-07-15 18:45:43.701374] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:09.368 [2024-07-15 18:45:43.701532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:09.368 [2024-07-15 18:45:43.701580] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2316a60 0 00:20:09.368 [2024-07-15 18:45:43.714016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:09.368 [2024-07-15 18:45:43.714057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:09.368 [2024-07-15 18:45:43.714066] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:09.368 [2024-07-15 18:45:43.714072] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:09.368 [2024-07-15 18:45:43.714135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.714144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.714151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.714168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:09.368 [2024-07-15 18:45:43.714216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.721989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722043] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:09.368 [2024-07-15 18:45:43.722055] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:09.368 [2024-07-15 18:45:43.722064] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:09.368 [2024-07-15 18:45:43.722090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722101] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:09.368 [2024-07-15 18:45:43.722262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:09.368 [2024-07-15 18:45:43.722270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722380] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:09.368 [2024-07-15 18:45:43.722390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722489] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722643] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:09.368 [2024-07-15 18:45:43.722649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722764] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:09.368 [2024-07-15 18:45:43.722769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.722876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.722880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.722891] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:09.368 [2024-07-15 18:45:43.722901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.722911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.722918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.722935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.722993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.723001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.723005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.723015] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:09.368 [2024-07-15 18:45:43.723021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:09.368 [2024-07-15 18:45:43.723030] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:09.368 [2024-07-15 18:45:43.723041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:09.368 [2024-07-15 18:45:43.723055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.723067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.368 [2024-07-15 18:45:43.723085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.723178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.368 [2024-07-15 18:45:43.723189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.368 [2024-07-15 18:45:43.723194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723200] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=4096, cccid=0 00:20:09.368 [2024-07-15 18:45:43.723206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359840) on tqpair(0x2316a60): expected_datao=0, payload_size=4096 00:20:09.368 [2024-07-15 18:45:43.723212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723222] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723227] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 
18:45:43.723237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.723244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.723249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.723264] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:09.368 [2024-07-15 18:45:43.723270] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:09.368 [2024-07-15 18:45:43.723276] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:09.368 [2024-07-15 18:45:43.723281] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:09.368 [2024-07-15 18:45:43.723287] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:09.368 [2024-07-15 18:45:43.723293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:09.368 [2024-07-15 18:45:43.723304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:09.368 [2024-07-15 18:45:43.723312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.723329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.368 [2024-07-15 18:45:43.723347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.368 [2024-07-15 18:45:43.723402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.368 [2024-07-15 18:45:43.723409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.368 [2024-07-15 18:45:43.723414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.368 [2024-07-15 18:45:43.723427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.368 [2024-07-15 18:45:43.723436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2316a60) 00:20:09.368 [2024-07-15 18:45:43.723443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.369 [2024-07-15 18:45:43.723451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2316a60) 00:20:09.369 
[2024-07-15 18:45:43.723467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.369 [2024-07-15 18:45:43.723474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.723490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.369 [2024-07-15 18:45:43.723497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.723512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.369 [2024-07-15 18:45:43.723518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.723552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.369 [2024-07-15 18:45:43.723571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359840, cid 0, qid 0 00:20:09.369 [2024-07-15 18:45:43.723578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23599c0, cid 1, qid 0 00:20:09.369 [2024-07-15 18:45:43.723584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359b40, cid 2, qid 0 00:20:09.369 [2024-07-15 18:45:43.723589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.369 [2024-07-15 18:45:43.723595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.369 [2024-07-15 18:45:43.723683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.369 [2024-07-15 18:45:43.723693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.369 [2024-07-15 18:45:43.723698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.369 [2024-07-15 18:45:43.723709] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:09.369 [2024-07-15 18:45:43.723719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723729] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.723761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.369 [2024-07-15 18:45:43.723779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.369 [2024-07-15 18:45:43.723831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.369 [2024-07-15 18:45:43.723838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.369 [2024-07-15 18:45:43.723843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.369 [2024-07-15 18:45:43.723918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.723937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.723941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.723948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.369 [2024-07-15 18:45:43.723992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.369 [2024-07-15 18:45:43.724055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.369 [2024-07-15 18:45:43.724062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.369 [2024-07-15 18:45:43.724066] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724071] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=4096, cccid=4 00:20:09.369 [2024-07-15 18:45:43.724077] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359e40) on tqpair(0x2316a60): expected_datao=0, payload_size=4096 00:20:09.369 [2024-07-15 18:45:43.724083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724091] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724096] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.369 [2024-07-15 18:45:43.724112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:09.369 [2024-07-15 18:45:43.724116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.369 [2024-07-15 18:45:43.724138] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:09.369 [2024-07-15 18:45:43.724151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.724183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.369 [2024-07-15 18:45:43.724201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.369 [2024-07-15 18:45:43.724278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.369 [2024-07-15 18:45:43.724286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.369 [2024-07-15 18:45:43.724290] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=4096, cccid=4 00:20:09.369 [2024-07-15 18:45:43.724301] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359e40) on tqpair(0x2316a60): expected_datao=0, payload_size=4096 00:20:09.369 [2024-07-15 18:45:43.724307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724314] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724319] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.369 [2024-07-15 18:45:43.724334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.369 [2024-07-15 18:45:43.724339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.369 [2024-07-15 18:45:43.724359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.369 [2024-07-15 18:45:43.724391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.369 [2024-07-15 18:45:43.724409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.369 [2024-07-15 18:45:43.724475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.369 [2024-07-15 18:45:43.724483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.369 [2024-07-15 18:45:43.724487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724492] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=4096, cccid=4 00:20:09.369 [2024-07-15 18:45:43.724497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359e40) on tqpair(0x2316a60): expected_datao=0, payload_size=4096 00:20:09.369 [2024-07-15 18:45:43.724503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724511] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724516] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.369 [2024-07-15 18:45:43.724532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.369 [2024-07-15 18:45:43.724537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.369 [2024-07-15 18:45:43.724542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.369 [2024-07-15 18:45:43.724551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724599] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:09.369 [2024-07-15 18:45:43.724605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:09.369 [2024-07-15 18:45:43.724611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:09.369 [2024-07-15 18:45:43.724630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.724642] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.724650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.724666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.370 [2024-07-15 18:45:43.724690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.370 [2024-07-15 18:45:43.724696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359fc0, cid 5, qid 0 00:20:09.370 [2024-07-15 18:45:43.724763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.724770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.724774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.724787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.724794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.724798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359fc0) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.724814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.724827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.724844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359fc0, cid 5, qid 0 00:20:09.370 [2024-07-15 18:45:43.724905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.724912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.724916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359fc0) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.724932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.724937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.724944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.724978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359fc0, cid 5, qid 0 00:20:09.370 [2024-07-15 18:45:43.725041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 
[2024-07-15 18:45:43.725048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359fc0) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.725069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.725081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.725099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359fc0, cid 5, qid 0 00:20:09.370 [2024-07-15 18:45:43.725150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.725157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359fc0) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.725185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.725198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.725206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.725218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.725238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.725249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.725261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2316a60) 00:20:09.370 [2024-07-15 18:45:43.725272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.370 [2024-07-15 18:45:43.725290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359fc0, cid 5, qid 0 00:20:09.370 [2024-07-15 18:45:43.725296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359e40, cid 4, qid 0 00:20:09.370 [2024-07-15 18:45:43.725302] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a140, cid 6, qid 0 00:20:09.370 [2024-07-15 18:45:43.725307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a2c0, cid 7, qid 0 00:20:09.370 [2024-07-15 18:45:43.725470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.370 [2024-07-15 18:45:43.725481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.370 [2024-07-15 18:45:43.725486] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725491] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=8192, cccid=5 00:20:09.370 [2024-07-15 18:45:43.725497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359fc0) on tqpair(0x2316a60): expected_datao=0, payload_size=8192 00:20:09.370 [2024-07-15 18:45:43.725503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725521] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725526] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.370 [2024-07-15 18:45:43.725539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.370 [2024-07-15 18:45:43.725544] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725548] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=512, cccid=4 00:20:09.370 [2024-07-15 18:45:43.725554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2359e40) on tqpair(0x2316a60): expected_datao=0, payload_size=512 00:20:09.370 [2024-07-15 18:45:43.725560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725567] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725571] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.370 [2024-07-15 18:45:43.725585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.370 [2024-07-15 18:45:43.725589] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725594] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2316a60): datao=0, datal=512, cccid=6 00:20:09.370 [2024-07-15 18:45:43.725599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235a140) on tqpair(0x2316a60): expected_datao=0, payload_size=512 00:20:09.370 [2024-07-15 18:45:43.725605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725612] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725616] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.370 [2024-07-15 18:45:43.725630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.370 [2024-07-15 18:45:43.725634] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725639] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2316a60): datao=0, datal=4096, cccid=7 00:20:09.370 [2024-07-15 18:45:43.725644] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x235a2c0) on tqpair(0x2316a60): expected_datao=0, payload_size=4096 00:20:09.370 [2024-07-15 18:45:43.725650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725658] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725662] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.725675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359fc0) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.725704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.725711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359e40) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.725734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.725741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a140) on tqpair=0x2316a60 00:20:09.370 [2024-07-15 18:45:43.725758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.370 [2024-07-15 18:45:43.725765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.370 [2024-07-15 18:45:43.725769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.370 [2024-07-15 18:45:43.725774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a2c0) on tqpair=0x2316a60 00:20:09.370 ===================================================== 00:20:09.370 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.370 ===================================================== 00:20:09.370 Controller Capabilities/Features 00:20:09.370 ================================ 00:20:09.370 Vendor ID: 8086 00:20:09.370 Subsystem Vendor ID: 8086 00:20:09.370 Serial Number: SPDK00000000000001 00:20:09.370 Model Number: SPDK bdev Controller 00:20:09.370 Firmware Version: 24.09 00:20:09.370 Recommended Arb Burst: 6 00:20:09.370 IEEE OUI Identifier: e4 d2 5c 00:20:09.370 Multi-path I/O 00:20:09.370 May have multiple subsystem ports: Yes 00:20:09.371 May have multiple controllers: Yes 00:20:09.371 Associated with SR-IOV VF: No 00:20:09.371 Max Data Transfer Size: 131072 00:20:09.371 Max Number of Namespaces: 32 00:20:09.371 Max Number of I/O Queues: 127 00:20:09.371 NVMe Specification Version (VS): 1.3 00:20:09.371 NVMe Specification Version (Identify): 1.3 00:20:09.371 Maximum Queue Entries: 128 00:20:09.371 Contiguous Queues Required: Yes 00:20:09.371 Arbitration Mechanisms Supported 00:20:09.371 Weighted Round Robin: Not Supported 
00:20:09.371 Vendor Specific: Not Supported 00:20:09.371 Reset Timeout: 15000 ms 00:20:09.371 Doorbell Stride: 4 bytes 00:20:09.371 NVM Subsystem Reset: Not Supported 00:20:09.371 Command Sets Supported 00:20:09.371 NVM Command Set: Supported 00:20:09.371 Boot Partition: Not Supported 00:20:09.371 Memory Page Size Minimum: 4096 bytes 00:20:09.371 Memory Page Size Maximum: 4096 bytes 00:20:09.371 Persistent Memory Region: Not Supported 00:20:09.371 Optional Asynchronous Events Supported 00:20:09.371 Namespace Attribute Notices: Supported 00:20:09.371 Firmware Activation Notices: Not Supported 00:20:09.371 ANA Change Notices: Not Supported 00:20:09.371 PLE Aggregate Log Change Notices: Not Supported 00:20:09.371 LBA Status Info Alert Notices: Not Supported 00:20:09.371 EGE Aggregate Log Change Notices: Not Supported 00:20:09.371 Normal NVM Subsystem Shutdown event: Not Supported 00:20:09.371 Zone Descriptor Change Notices: Not Supported 00:20:09.371 Discovery Log Change Notices: Not Supported 00:20:09.371 Controller Attributes 00:20:09.371 128-bit Host Identifier: Supported 00:20:09.371 Non-Operational Permissive Mode: Not Supported 00:20:09.371 NVM Sets: Not Supported 00:20:09.371 Read Recovery Levels: Not Supported 00:20:09.371 Endurance Groups: Not Supported 00:20:09.371 Predictable Latency Mode: Not Supported 00:20:09.371 Traffic Based Keep ALive: Not Supported 00:20:09.371 Namespace Granularity: Not Supported 00:20:09.371 SQ Associations: Not Supported 00:20:09.371 UUID List: Not Supported 00:20:09.371 Multi-Domain Subsystem: Not Supported 00:20:09.371 Fixed Capacity Management: Not Supported 00:20:09.371 Variable Capacity Management: Not Supported 00:20:09.371 Delete Endurance Group: Not Supported 00:20:09.371 Delete NVM Set: Not Supported 00:20:09.371 Extended LBA Formats Supported: Not Supported 00:20:09.371 Flexible Data Placement Supported: Not Supported 00:20:09.371 00:20:09.371 Controller Memory Buffer Support 00:20:09.371 ================================ 00:20:09.371 Supported: No 00:20:09.371 00:20:09.371 Persistent Memory Region Support 00:20:09.371 ================================ 00:20:09.371 Supported: No 00:20:09.371 00:20:09.371 Admin Command Set Attributes 00:20:09.371 ============================ 00:20:09.371 Security Send/Receive: Not Supported 00:20:09.371 Format NVM: Not Supported 00:20:09.371 Firmware Activate/Download: Not Supported 00:20:09.371 Namespace Management: Not Supported 00:20:09.371 Device Self-Test: Not Supported 00:20:09.371 Directives: Not Supported 00:20:09.371 NVMe-MI: Not Supported 00:20:09.371 Virtualization Management: Not Supported 00:20:09.371 Doorbell Buffer Config: Not Supported 00:20:09.371 Get LBA Status Capability: Not Supported 00:20:09.371 Command & Feature Lockdown Capability: Not Supported 00:20:09.371 Abort Command Limit: 4 00:20:09.371 Async Event Request Limit: 4 00:20:09.371 Number of Firmware Slots: N/A 00:20:09.371 Firmware Slot 1 Read-Only: N/A 00:20:09.371 Firmware Activation Without Reset: N/A 00:20:09.371 Multiple Update Detection Support: N/A 00:20:09.371 Firmware Update Granularity: No Information Provided 00:20:09.371 Per-Namespace SMART Log: No 00:20:09.371 Asymmetric Namespace Access Log Page: Not Supported 00:20:09.371 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:09.371 Command Effects Log Page: Supported 00:20:09.371 Get Log Page Extended Data: Supported 00:20:09.371 Telemetry Log Pages: Not Supported 00:20:09.371 Persistent Event Log Pages: Not Supported 00:20:09.371 Supported Log Pages Log Page: May Support 
00:20:09.371 Commands Supported & Effects Log Page: Not Supported 00:20:09.371 Feature Identifiers & Effects Log Page:May Support 00:20:09.371 NVMe-MI Commands & Effects Log Page: May Support 00:20:09.371 Data Area 4 for Telemetry Log: Not Supported 00:20:09.371 Error Log Page Entries Supported: 128 00:20:09.371 Keep Alive: Supported 00:20:09.371 Keep Alive Granularity: 10000 ms 00:20:09.371 00:20:09.371 NVM Command Set Attributes 00:20:09.371 ========================== 00:20:09.371 Submission Queue Entry Size 00:20:09.371 Max: 64 00:20:09.371 Min: 64 00:20:09.371 Completion Queue Entry Size 00:20:09.371 Max: 16 00:20:09.371 Min: 16 00:20:09.371 Number of Namespaces: 32 00:20:09.371 Compare Command: Supported 00:20:09.371 Write Uncorrectable Command: Not Supported 00:20:09.371 Dataset Management Command: Supported 00:20:09.371 Write Zeroes Command: Supported 00:20:09.371 Set Features Save Field: Not Supported 00:20:09.371 Reservations: Supported 00:20:09.371 Timestamp: Not Supported 00:20:09.371 Copy: Supported 00:20:09.371 Volatile Write Cache: Present 00:20:09.371 Atomic Write Unit (Normal): 1 00:20:09.371 Atomic Write Unit (PFail): 1 00:20:09.371 Atomic Compare & Write Unit: 1 00:20:09.371 Fused Compare & Write: Supported 00:20:09.371 Scatter-Gather List 00:20:09.371 SGL Command Set: Supported 00:20:09.371 SGL Keyed: Supported 00:20:09.371 SGL Bit Bucket Descriptor: Not Supported 00:20:09.371 SGL Metadata Pointer: Not Supported 00:20:09.371 Oversized SGL: Not Supported 00:20:09.371 SGL Metadata Address: Not Supported 00:20:09.371 SGL Offset: Supported 00:20:09.371 Transport SGL Data Block: Not Supported 00:20:09.371 Replay Protected Memory Block: Not Supported 00:20:09.371 00:20:09.371 Firmware Slot Information 00:20:09.371 ========================= 00:20:09.371 Active slot: 1 00:20:09.371 Slot 1 Firmware Revision: 24.09 00:20:09.371 00:20:09.371 00:20:09.371 Commands Supported and Effects 00:20:09.371 ============================== 00:20:09.371 Admin Commands 00:20:09.371 -------------- 00:20:09.371 Get Log Page (02h): Supported 00:20:09.371 Identify (06h): Supported 00:20:09.371 Abort (08h): Supported 00:20:09.371 Set Features (09h): Supported 00:20:09.371 Get Features (0Ah): Supported 00:20:09.371 Asynchronous Event Request (0Ch): Supported 00:20:09.371 Keep Alive (18h): Supported 00:20:09.371 I/O Commands 00:20:09.371 ------------ 00:20:09.371 Flush (00h): Supported LBA-Change 00:20:09.371 Write (01h): Supported LBA-Change 00:20:09.371 Read (02h): Supported 00:20:09.371 Compare (05h): Supported 00:20:09.371 Write Zeroes (08h): Supported LBA-Change 00:20:09.371 Dataset Management (09h): Supported LBA-Change 00:20:09.371 Copy (19h): Supported LBA-Change 00:20:09.371 00:20:09.371 Error Log 00:20:09.371 ========= 00:20:09.371 00:20:09.371 Arbitration 00:20:09.371 =========== 00:20:09.371 Arbitration Burst: 1 00:20:09.371 00:20:09.371 Power Management 00:20:09.371 ================ 00:20:09.371 Number of Power States: 1 00:20:09.371 Current Power State: Power State #0 00:20:09.371 Power State #0: 00:20:09.371 Max Power: 0.00 W 00:20:09.371 Non-Operational State: Operational 00:20:09.371 Entry Latency: Not Reported 00:20:09.371 Exit Latency: Not Reported 00:20:09.371 Relative Read Throughput: 0 00:20:09.371 Relative Read Latency: 0 00:20:09.371 Relative Write Throughput: 0 00:20:09.371 Relative Write Latency: 0 00:20:09.371 Idle Power: Not Reported 00:20:09.371 Active Power: Not Reported 00:20:09.371 Non-Operational Permissive Mode: Not Supported 00:20:09.371 00:20:09.371 Health 
Information 00:20:09.371 ================== 00:20:09.371 Critical Warnings: 00:20:09.371 Available Spare Space: OK 00:20:09.371 Temperature: OK 00:20:09.371 Device Reliability: OK 00:20:09.371 Read Only: No 00:20:09.371 Volatile Memory Backup: OK 00:20:09.371 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:09.371 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:09.371 Available Spare: 0% 00:20:09.371 Available Spare Threshold: 0% 00:20:09.371 Life Percentage Used:[2024-07-15 18:45:43.725892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.371 [2024-07-15 18:45:43.725898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2316a60) 00:20:09.371 [2024-07-15 18:45:43.725906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.371 [2024-07-15 18:45:43.725929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x235a2c0, cid 7, qid 0 00:20:09.371 [2024-07-15 18:45:43.730005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.371 [2024-07-15 18:45:43.730033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.371 [2024-07-15 18:45:43.730038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.371 [2024-07-15 18:45:43.730044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x235a2c0) on tqpair=0x2316a60 00:20:09.371 [2024-07-15 18:45:43.730101] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:09.371 [2024-07-15 18:45:43.730114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359840) on tqpair=0x2316a60 00:20:09.371 [2024-07-15 18:45:43.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.371 [2024-07-15 18:45:43.730130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23599c0) on tqpair=0x2316a60 00:20:09.371 [2024-07-15 18:45:43.730136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.372 [2024-07-15 18:45:43.730142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359b40) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.372 [2024-07-15 18:45:43.730155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.372 [2024-07-15 18:45:43.730175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 
18:45:43.730296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.730426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730448] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:09.372 [2024-07-15 18:45:43.730454] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:09.372 [2024-07-15 18:45:43.730464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.730550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730626] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.730676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.730783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.730888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 18:45:43.730894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.730899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.730914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.730923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.730930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.730957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0 00:20:09.372 [2024-07-15 18:45:43.731007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.372 [2024-07-15 
18:45:43.731014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.372 [2024-07-15 18:45:43.731019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.731023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2359cc0) on tqpair=0x2316a60 00:20:09.372 [2024-07-15 18:45:43.731033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.731038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.372 [2024-07-15 18:45:43.731042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2316a60) 00:20:09.372 [2024-07-15 18:45:43.731050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.372 [2024-07-15 18:45:43.731067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2359cc0, cid 3, qid 0
[... the same FABRIC PROPERTY GET / capsule_cmd debug cycle repeats for every shutdown poll up to 18:45:43.738222; only the timestamps change, so the repeated cycles are omitted here ...]
00:20:09.375 [2024-07-15 18:45:43.738232] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:09.375 0% 00:20:09.375 Data Units Read: 0 00:20:09.375 Data Units Written: 0 00:20:09.375 Host Read Commands: 0 00:20:09.375 Host Write Commands: 0 00:20:09.375 Controller Busy Time: 0 minutes
00:20:09.375 Power Cycles: 0 00:20:09.375 Power On Hours: 0 hours 00:20:09.375 Unsafe Shutdowns: 0 00:20:09.375 Unrecoverable Media Errors: 0 00:20:09.375 Lifetime Error Log Entries: 0 00:20:09.375 Warning Temperature Time: 0 minutes 00:20:09.375 Critical Temperature Time: 0 minutes 00:20:09.375 00:20:09.375 Number of Queues 00:20:09.375 ================ 00:20:09.375 Number of I/O Submission Queues: 127 00:20:09.375 Number of I/O Completion Queues: 127 00:20:09.375 00:20:09.375 Active Namespaces 00:20:09.375 ================= 00:20:09.375 Namespace ID:1 00:20:09.375 Error Recovery Timeout: Unlimited 00:20:09.375 Command Set Identifier: NVM (00h) 00:20:09.375 Deallocate: Supported 00:20:09.375 Deallocated/Unwritten Error: Not Supported 00:20:09.375 Deallocated Read Value: Unknown 00:20:09.375 Deallocate in Write Zeroes: Not Supported 00:20:09.375 Deallocated Guard Field: 0xFFFF 00:20:09.375 Flush: Supported 00:20:09.375 Reservation: Supported 00:20:09.375 Namespace Sharing Capabilities: Multiple Controllers 00:20:09.375 Size (in LBAs): 131072 (0GiB) 00:20:09.375 Capacity (in LBAs): 131072 (0GiB) 00:20:09.375 Utilization (in LBAs): 131072 (0GiB) 00:20:09.375 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:09.375 EUI64: ABCDEF0123456789 00:20:09.375 UUID: e5203ba7-d491-4b0c-af7f-4877aa960a51 00:20:09.375 Thin Provisioning: Not Supported 00:20:09.375 Per-NS Atomic Units: Yes 00:20:09.375 Atomic Boundary Size (Normal): 0 00:20:09.375 Atomic Boundary Size (PFail): 0 00:20:09.375 Atomic Boundary Offset: 0 00:20:09.375 Maximum Single Source Range Length: 65535 00:20:09.375 Maximum Copy Length: 65535 00:20:09.375 Maximum Source Range Count: 1 00:20:09.375 NGUID/EUI64 Never Reused: No 00:20:09.375 Namespace Write Protected: No 00:20:09.375 Number of LBA Formats: 1 00:20:09.375 Current LBA Format: LBA Format #00 00:20:09.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:09.375 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.375 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.375 rmmod nvme_tcp 00:20:09.375 rmmod nvme_fabrics 00:20:09.634 rmmod nvme_keyring 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 87239 ']' 00:20:09.634 
18:45:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 87239 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 87239 ']' 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 87239 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87239 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:09.634 killing process with pid 87239 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87239' 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 87239 00:20:09.634 18:45:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 87239 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:09.944 00:20:09.944 real 0m2.740s 00:20:09.944 user 0m7.376s 00:20:09.944 sys 0m0.764s 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.944 18:45:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.944 ************************************ 00:20:09.944 END TEST nvmf_identify 00:20:09.944 ************************************ 00:20:09.944 18:45:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.944 18:45:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:09.944 18:45:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.944 18:45:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.944 18:45:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.944 ************************************ 00:20:09.944 START TEST nvmf_perf 00:20:09.944 ************************************ 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:09.944 * Looking for test storage... 
00:20:09.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:09.944 Cannot find device "nvmf_tgt_br" 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.944 Cannot find device "nvmf_tgt_br2" 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:09.944 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:09.944 Cannot find device "nvmf_tgt_br" 00:20:10.202 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:20:10.202 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:10.202 Cannot find device "nvmf_tgt_br2" 00:20:10.202 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:20:10.202 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.203 
18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.203 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:10.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:20:10.460 00:20:10.460 --- 10.0.0.2 ping statistics --- 00:20:10.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.460 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:10.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:10.460 00:20:10.460 --- 10.0.0.3 ping statistics --- 00:20:10.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.460 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:20:10.460 00:20:10.460 --- 10.0.0.1 ping statistics --- 00:20:10.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.460 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87466 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87466 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87466 ']' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.460 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.461 18:45:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:10.461 [2024-07-15 18:45:44.791256] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:10.461 [2024-07-15 18:45:44.791371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.461 [2024-07-15 18:45:44.933244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.718 [2024-07-15 18:45:45.039994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.718 [2024-07-15 18:45:45.040055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
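For anyone reconstructing the test topology from the nvmf_veth_init trace above, the network bring-up amounts to roughly the following sequence. This is a condensed sketch, not the script itself: the namespace, interface, and address names are taken verbatim from the commands logged by nvmf/common.sh, the second target interface (nvmf_tgt_if2 at 10.0.0.3) is created the same way, and cleanup and the FORWARD iptables rule are omitted.

    # target namespace plus two veth pairs: one for the initiator side, one for the target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # move the target end into the namespace and address both sides
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring the links up and bridge the peer ends so 10.0.0.1 can reach 10.0.0.2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic on the initiator interface (port 4420)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3, and 10.0.0.1 recorded just above are the script's sanity check that this topology is reachable before nvmf_tgt is started inside the namespace.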
00:20:10.718 [2024-07-15 18:45:45.040067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.718 [2024-07-15 18:45:45.040076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.718 [2024-07-15 18:45:45.040084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.718 [2024-07-15 18:45:45.040304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.718 [2024-07-15 18:45:45.040423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.718 [2024-07-15 18:45:45.041036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.718 [2024-07-15 18:45:45.041037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:11.651 18:45:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:11.908 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:11.908 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:12.166 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:12.166 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:12.424 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:12.424 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:12.424 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:12.424 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:12.424 18:45:46 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.682 [2024-07-15 18:45:47.036980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.682 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.940 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:12.940 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:13.198 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:13.199 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:13.457 18:45:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.715 [2024-07-15 18:45:48.148816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.715 18:45:48 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:14.281 18:45:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:14.281 18:45:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:14.281 18:45:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:14.281 18:45:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:15.215 Initializing NVMe Controllers 00:20:15.215 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:15.215 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:15.215 Initialization complete. Launching workers. 00:20:15.215 ======================================================== 00:20:15.215 Latency(us) 00:20:15.215 Device Information : IOPS MiB/s Average min max 00:20:15.215 PCIE (0000:00:10.0) NSID 1 from core 0: 21824.00 85.25 1465.92 398.92 6280.79 00:20:15.215 ======================================================== 00:20:15.215 Total : 21824.00 85.25 1465.92 398.92 6280.79 00:20:15.215 00:20:15.215 18:45:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:16.605 Initializing NVMe Controllers 00:20:16.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:16.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:16.605 Initialization complete. Launching workers. 00:20:16.605 ======================================================== 00:20:16.605 Latency(us) 00:20:16.605 Device Information : IOPS MiB/s Average min max 00:20:16.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3443.99 13.45 290.11 101.50 7196.31 00:20:16.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.30 4984.28 12064.39 00:20:16.605 ======================================================== 00:20:16.605 Total : 3566.99 13.93 562.67 101.50 12064.39 00:20:16.605 00:20:16.605 18:45:50 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:17.975 Initializing NVMe Controllers 00:20:17.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.975 Initialization complete. Launching workers. 
00:20:17.975 ======================================================== 00:20:17.975 Latency(us) 00:20:17.975 Device Information : IOPS MiB/s Average min max 00:20:17.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8537.23 33.35 3751.88 557.04 8505.46 00:20:17.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2719.44 10.62 11878.82 6574.33 20276.06 00:20:17.975 ======================================================== 00:20:17.975 Total : 11256.67 43.97 5715.22 557.04 20276.06 00:20:17.975 00:20:17.975 18:45:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:17.975 18:45:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.510 Initializing NVMe Controllers 00:20:20.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.510 Controller IO queue size 128, less than required. 00:20:20.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.510 Controller IO queue size 128, less than required. 00:20:20.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:20.510 Initialization complete. Launching workers. 00:20:20.510 ======================================================== 00:20:20.510 Latency(us) 00:20:20.510 Device Information : IOPS MiB/s Average min max 00:20:20.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1627.11 406.78 80159.37 55396.60 143868.47 00:20:20.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 535.88 133.97 245804.09 71323.54 417029.94 00:20:20.510 ======================================================== 00:20:20.510 Total : 2162.99 540.75 121197.65 55396.60 417029.94 00:20:20.510 00:20:20.510 18:45:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:20.769 Initializing NVMe Controllers 00:20:20.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.769 Controller IO queue size 128, less than required. 00:20:20.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:20.769 Controller IO queue size 128, less than required. 00:20:20.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:20.769 WARNING: Some requested NVMe devices were skipped 00:20:20.769 No valid NVMe controllers or AIO or URING devices found 00:20:20.769 18:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:23.322 Initializing NVMe Controllers 00:20:23.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.322 Controller IO queue size 128, less than required. 00:20:23.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:23.322 Controller IO queue size 128, less than required. 00:20:23.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:23.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:23.322 Initialization complete. Launching workers. 00:20:23.322 00:20:23.322 ==================== 00:20:23.322 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:23.322 TCP transport: 00:20:23.322 polls: 7473 00:20:23.322 idle_polls: 4542 00:20:23.322 sock_completions: 2931 00:20:23.322 nvme_completions: 5433 00:20:23.322 submitted_requests: 8082 00:20:23.322 queued_requests: 1 00:20:23.322 00:20:23.322 ==================== 00:20:23.322 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:23.322 TCP transport: 00:20:23.322 polls: 9869 00:20:23.322 idle_polls: 6669 00:20:23.322 sock_completions: 3200 00:20:23.322 nvme_completions: 5999 00:20:23.322 submitted_requests: 9014 00:20:23.322 queued_requests: 1 00:20:23.322 ======================================================== 00:20:23.322 Latency(us) 00:20:23.322 Device Information : IOPS MiB/s Average min max 00:20:23.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1357.09 339.27 97412.44 61963.49 145089.66 00:20:23.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1498.50 374.62 86780.69 32731.35 127840.68 00:20:23.322 ======================================================== 00:20:23.322 Total : 2855.59 713.90 91833.33 32731.35 145089.66 00:20:23.322 00:20:23.322 18:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:23.322 18:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:23.580 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.581 rmmod nvme_tcp 00:20:23.581 rmmod nvme_fabrics 00:20:23.581 rmmod nvme_keyring 00:20:23.581 18:45:57 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87466 ']' 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87466 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87466 ']' 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87466 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.581 18:45:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87466 00:20:23.581 killing process with pid 87466 00:20:23.581 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.581 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.581 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87466' 00:20:23.581 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87466 00:20:23.581 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87466 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:24.515 00:20:24.515 real 0m14.458s 00:20:24.515 user 0m52.980s 00:20:24.515 sys 0m3.771s 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.515 18:45:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.515 ************************************ 00:20:24.515 END TEST nvmf_perf 00:20:24.515 ************************************ 00:20:24.515 18:45:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:24.515 18:45:58 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:24.515 18:45:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:24.515 18:45:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.515 18:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.515 ************************************ 00:20:24.515 START TEST nvmf_fio_host 00:20:24.515 ************************************ 00:20:24.515 18:45:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:24.515 * Looking for test storage... 
00:20:24.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.515 18:45:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.515 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.515 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
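(Side note on the host-identity variables just initialized in the trace above: NVME_HOSTNQN, NVME_HOSTID and NVME_HOST carry the initiator-side credentials. A minimal sketch of how they compose, using values visible in this log; the fio_host run itself drives I/O through SPDK's userspace initiator, so the kernel-mode connect shown here is purely illustrative and is not executed by the test.)
  # Sketch only, not the verbatim common.sh code; address, port and NQNs as used elsewhere in this log.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # the UUID suffix, matching the NVME_HOSTID value seen in the trace
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"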
00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:24.516 Cannot find device "nvmf_tgt_br" 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.516 Cannot find device "nvmf_tgt_br2" 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:24.516 Cannot find device "nvmf_tgt_br" 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:24.516 Cannot find device "nvmf_tgt_br2" 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:20:24.516 18:45:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:24.774 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.775 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.775 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.775 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.775 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.775 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:25.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:25.033 00:20:25.033 --- 10.0.0.2 ping statistics --- 00:20:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.033 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:25.033 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:25.033 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:20:25.033 00:20:25.033 --- 10.0.0.3 ping statistics --- 00:20:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.033 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:25.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:20:25.033 00:20:25.033 --- 10.0.0.1 ping statistics --- 00:20:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.033 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87943 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87943 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87943 ']' 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.033 18:45:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.033 [2024-07-15 18:45:59.369036] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:25.033 [2024-07-15 18:45:59.369161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.290 [2024-07-15 18:45:59.520666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.290 [2024-07-15 18:45:59.642707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:25.290 [2024-07-15 18:45:59.642786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.290 [2024-07-15 18:45:59.642804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.290 [2024-07-15 18:45:59.642820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.290 [2024-07-15 18:45:59.642835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.290 [2024-07-15 18:45:59.643009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.290 [2024-07-15 18:45:59.643408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.290 [2024-07-15 18:45:59.643676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.290 [2024-07-15 18:45:59.643693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.856 18:46:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.856 18:46:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:25.856 18:46:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:26.114 [2024-07-15 18:46:00.515160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.114 18:46:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:26.114 18:46:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.114 18:46:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.398 18:46:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:26.657 Malloc1 00:20:26.657 18:46:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.915 18:46:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:26.915 18:46:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.173 [2024-07-15 18:46:01.547959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.173 18:46:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:27.455 18:46:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:27.714 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:27.714 fio-3.35 00:20:27.714 Starting 1 thread 00:20:30.241 00:20:30.241 test: (groupid=0, jobs=1): err= 0: pid=88074: Mon Jul 15 18:46:04 2024 00:20:30.241 read: IOPS=9852, BW=38.5MiB/s (40.4MB/s)(77.2MiB/2006msec) 00:20:30.241 slat (nsec): min=1844, max=413882, avg=2135.09, stdev=3659.35 00:20:30.241 clat (usec): min=3322, max=12360, avg=6772.82, stdev=530.44 00:20:30.241 lat (usec): min=3324, max=12362, avg=6774.96, stdev=530.31 00:20:30.241 clat percentiles (usec): 00:20:30.241 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:20:30.241 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:20:30.241 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:20:30.242 | 99.00th=[ 7898], 99.50th=[ 8586], 99.90th=[11731], 99.95th=[12125], 00:20:30.242 | 99.99th=[12387] 00:20:30.242 bw ( KiB/s): min=38880, max=39696, per=99.92%, avg=39380.00, stdev=366.87, samples=4 00:20:30.242 iops : min= 9720, max= 9924, avg=9845.00, stdev=91.72, samples=4 00:20:30.242 write: IOPS=9861, BW=38.5MiB/s (40.4MB/s)(77.3MiB/2006msec); 0 zone resets 00:20:30.242 slat (nsec): min=1920, max=322943, avg=2290.57, stdev=2552.38 00:20:30.242 clat (usec): min=2924, max=11531, avg=6153.20, stdev=434.05 
00:20:30.242 lat (usec): min=2963, max=11533, avg=6155.49, stdev=433.91 00:20:30.242 clat percentiles (usec): 00:20:30.242 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5866], 00:20:30.242 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:20:30.242 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6718], 00:20:30.242 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 9896], 99.95th=[10290], 00:20:30.242 | 99.99th=[10814] 00:20:30.242 bw ( KiB/s): min=39072, max=40128, per=100.00%, avg=39458.00, stdev=461.34, samples=4 00:20:30.242 iops : min= 9768, max=10032, avg=9864.50, stdev=115.34, samples=4 00:20:30.242 lat (msec) : 4=0.22%, 10=99.55%, 20=0.23% 00:20:30.242 cpu : usr=63.39%, sys=27.63%, ctx=19, majf=0, minf=6 00:20:30.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:30.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:30.242 issued rwts: total=19764,19782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:30.242 00:20:30.242 Run status group 0 (all jobs): 00:20:30.242 READ: bw=38.5MiB/s (40.4MB/s), 38.5MiB/s-38.5MiB/s (40.4MB/s-40.4MB/s), io=77.2MiB (81.0MB), run=2006-2006msec 00:20:30.242 WRITE: bw=38.5MiB/s (40.4MB/s), 38.5MiB/s-38.5MiB/s (40.4MB/s-40.4MB/s), io=77.3MiB (81.0MB), run=2006-2006msec 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:30.242 18:46:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:30.242 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:30.242 fio-3.35 00:20:30.242 Starting 1 thread 00:20:32.770 00:20:32.770 test: (groupid=0, jobs=1): err= 0: pid=88117: Mon Jul 15 18:46:06 2024 00:20:32.770 read: IOPS=8859, BW=138MiB/s (145MB/s)(278MiB/2007msec) 00:20:32.770 slat (usec): min=2, max=131, avg= 3.36, stdev= 1.99 00:20:32.771 clat (usec): min=2604, max=17491, avg=8642.06, stdev=2130.09 00:20:32.771 lat (usec): min=2608, max=17494, avg=8645.41, stdev=2130.15 00:20:32.771 clat percentiles (usec): 00:20:32.771 | 1.00th=[ 4555], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6718], 00:20:32.771 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:20:32.771 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11207], 95.00th=[11994], 00:20:32.771 | 99.00th=[14484], 99.50th=[15270], 99.90th=[16450], 99.95th=[16581], 00:20:32.771 | 99.99th=[16712] 00:20:32.771 bw ( KiB/s): min=65152, max=74848, per=49.41%, avg=70036.00, stdev=4850.04, samples=4 00:20:32.771 iops : min= 4072, max= 4678, avg=4377.25, stdev=303.13, samples=4 00:20:32.771 write: IOPS=5112, BW=79.9MiB/s (83.8MB/s)(143MiB/1784msec); 0 zone resets 00:20:32.771 slat (usec): min=34, max=462, avg=37.23, stdev= 8.58 00:20:32.771 clat (usec): min=3331, max=18658, avg=10446.38, stdev=1691.41 00:20:32.771 lat (usec): min=3367, max=18693, avg=10483.61, stdev=1691.56 00:20:32.771 clat percentiles (usec): 00:20:32.771 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8979], 00:20:32.771 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:20:32.771 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12649], 95.00th=[13566], 00:20:32.771 | 99.00th=[15270], 99.50th=[16450], 99.90th=[17171], 99.95th=[18220], 00:20:32.771 | 99.99th=[18744] 00:20:32.771 bw ( KiB/s): min=68000, max=78752, per=89.07%, avg=72862.50, stdev=4726.18, samples=4 00:20:32.771 iops : min= 4250, max= 4922, avg=4553.75, stdev=295.50, samples=4 00:20:32.771 lat (msec) : 4=0.30%, 10=61.85%, 20=37.84% 00:20:32.771 cpu : usr=72.18%, sys=18.00%, ctx=5, majf=0, minf=16 00:20:32.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:32.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.771 issued rwts: total=17781,9121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.771 00:20:32.771 Run status group 0 (all jobs): 00:20:32.771 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=278MiB (291MB), run=2007-2007msec 00:20:32.771 WRITE: bw=79.9MiB/s (83.8MB/s), 
79.9MiB/s-79.9MiB/s (83.8MB/s-83.8MB/s), io=143MiB (149MB), run=1784-1784msec 00:20:32.771 18:46:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.771 rmmod nvme_tcp 00:20:32.771 rmmod nvme_fabrics 00:20:32.771 rmmod nvme_keyring 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87943 ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87943 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87943 ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87943 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87943 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87943' 00:20:32.771 killing process with pid 87943 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87943 00:20:32.771 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87943 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.030 18:46:07 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:33.030 00:20:33.030 real 0m8.691s 00:20:33.030 user 0m34.759s 00:20:33.030 sys 0m2.571s 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.030 ************************************ 00:20:33.030 END TEST nvmf_fio_host 00:20:33.030 ************************************ 00:20:33.030 18:46:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.030 18:46:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:33.030 18:46:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:33.030 18:46:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:33.030 18:46:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.030 18:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.289 ************************************ 00:20:33.289 START TEST nvmf_failover 00:20:33.289 ************************************ 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:33.289 * Looking for test storage... 00:20:33.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
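(The MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set above feed the same target bring-up pattern already traced in the fio_host run. Reduced to its core, that bring-up is the RPC sequence below; this is a sketch assembled from the calls logged earlier, reusing the subsystem NQN and serial number seen there, and failover.sh layers its own listeners and options on top.)
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                                    # flags exactly as logged above
  $rpc_py bdev_malloc_create 64 512 -b Malloc1                                       # 64 MiB bdev with 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420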
00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:33.289 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:33.290 Cannot find device "nvmf_tgt_br" 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.290 Cannot find device "nvmf_tgt_br2" 00:20:33.290 18:46:07 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:33.290 Cannot find device "nvmf_tgt_br" 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:33.290 Cannot find device "nvmf_tgt_br2" 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:33.290 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:33.549 18:46:07 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:33.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:20:33.549 00:20:33.549 --- 10.0.0.2 ping statistics --- 00:20:33.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.549 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:33.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:33.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:33.549 00:20:33.549 --- 10.0.0.3 ping statistics --- 00:20:33.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.549 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:33.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:20:33.549 00:20:33.549 --- 10.0.0.1 ping statistics --- 00:20:33.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.549 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.549 18:46:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=88341 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 88341 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88341 ']' 
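Condensed from the nvmf_veth_init and nvmfappstart trace above, the test-bed setup boils down to the sketch below (not the verbatim common.sh code; the interface names, the 10.0.0.0/24 addresses, the 0xE core mask and the repo paths are exactly the ones this run uses). The initiator stays in the root namespace on 10.0.0.1, the target gets its own namespace with two addresses (10.0.0.2 and 10.0.0.3), and a bridge ties the host-side veth legs together:

    # Target-side namespace with two veth legs; initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the three host-side legs together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic in and sanity-check reachability in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Load the initiator-side transport and start the target inside the namespace
    # (core mask 0xE, all tracepoint groups enabled); the harness then waits for
    # the target's RPC socket /var/tmp/spdk.sock, as the waitforlisten loop below does.
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
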
00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.549 18:46:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:33.807 [2024-07-15 18:46:08.082063] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:33.807 [2024-07-15 18:46:08.082183] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.807 [2024-07-15 18:46:08.228104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:34.066 [2024-07-15 18:46:08.376493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.066 [2024-07-15 18:46:08.376772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.066 [2024-07-15 18:46:08.376870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.066 [2024-07-15 18:46:08.376921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.066 [2024-07-15 18:46:08.376962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
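Because the target is started with all tracepoint groups enabled (-e 0xFFFF), the startup notices above also spell out how to pull the trace data while the run is live; a minimal sketch, assuming the spdk_trace tool built by this repo is on PATH:

    # Snapshot the live trace ring of app instance 0 (the command the notice suggests):
    spdk_trace -s nvmf -i 0
    # ...or keep the shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/
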
00:20:34.066 [2024-07-15 18:46:08.377141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.066 [2024-07-15 18:46:08.378142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.066 [2024-07-15 18:46:08.378146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.632 18:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:35.198 [2024-07-15 18:46:09.425906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.198 18:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:35.198 Malloc0 00:20:35.456 18:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.781 18:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.038 18:46:10 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.294 [2024-07-15 18:46:10.540549] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.294 18:46:10 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.294 [2024-07-15 18:46:10.772739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.551 18:46:10 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:36.551 [2024-07-15 18:46:10.989004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88454 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88454 /var/tmp/bdevperf.sock 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88454 ']' 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.551 18:46:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:37.922 18:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.922 18:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:37.922 18:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.922 NVMe0n1 00:20:37.922 18:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.487 00:20:38.487 18:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88496 00:20:38.487 18:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.487 18:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:39.418 18:46:13 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.675 [2024-07-15 18:46:13.935756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.935986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 
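Stripped of the xtrace noise, the scenario up to this point is the sequence below: one malloc-backed subsystem exposed on three TCP ports, bdevperf attached to it over two of them, and the first path (port 4420) then pulled out from under the running I/O. The repeated tcp.c:1621 "recv state ... state(5)" messages above coincide with that listener removal, i.e. the target tearing down the qpairs that were connected through it. This is a condensed replay (the port loop is editorial shorthand), not the literal failover.sh:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default RPC socket /var/tmp/spdk.sock).
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf with two paths to the same subsystem (the harness waits
    # for /var/tmp/bdevperf.sock before issuing RPCs to it).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    # First failover: drop the listener the I/O started on.
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
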
00:20:39.675 [2024-07-15 18:46:13.936005] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.936024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 [2024-07-15 18:46:13.936041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4f80 is same with the state(5) to be set 00:20:39.675 18:46:13 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:42.984 18:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:42.984 00:20:42.984 18:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:42.984 [2024-07-15 18:46:17.456113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456238] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456362] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456467] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.984 [2024-07-15 18:46:17.456538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.985 [2024-07-15 18:46:17.456548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:42.985 [2024-07-15 18:46:17.456558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5e10 is same with the state(5) to be set 00:20:43.241 18:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:46.547 18:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.547 [2024-07-15 18:46:20.739904] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.547 18:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:47.509 18:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:47.774 [2024-07-15 18:46:22.006172] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 [2024-07-15 18:46:22.006264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb69a0 is same with the state(5) to be set 00:20:47.774 18:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88496 00:20:54.387 0 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 88454 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88454 ']' 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88454 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88454 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88454' 00:20:54.387 killing process with pid 88454 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88454 00:20:54.387 18:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88454 00:20:54.387 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:54.387 [2024-07-15 18:46:11.078753] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
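The rest of the run, traced above, follows the same pattern: a third path is added on 4422, 4421 is dropped, 4420 is brought back, and 4422 is finally dropped so the I/O lands back on the first path; the harness then waits for the 15-second bdevperf job (pid 88496 here) and kills bdevperf (pid 88454) before dumping its log, try.txt, below. In that dump, the burst of "ABORTED - SQ DELETION" completions stamped 18:46:13 is the I/O that was in flight on the 4420 connection when the first listener was removed. Continuing the sketch above for the final failback and teardown:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Fail back to the first path, then retire the third one.
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Let the timed bdevperf run drain (wait 88496 in the trace), then stop bdevperf
    # itself (killprocess 88454 in the trace, which checks the pid before killing).
    wait "$run_test_pid"
    kill "$bdevperf_pid"
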
00:20:54.387 [2024-07-15 18:46:11.078972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88454 ] 00:20:54.387 [2024-07-15 18:46:11.232807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.387 [2024-07-15 18:46:11.342038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.387 Running I/O for 15 seconds... 00:20:54.387 [2024-07-15 18:46:13.936693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.387 [2024-07-15 18:46:13.936914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.936958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.936976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.936992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:54.387 [2024-07-15 18:46:13.937024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937380] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937802] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.937974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.937991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.938006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.938023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.938039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.938062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.938078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.387 [2024-07-15 18:46:13.938095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.387 [2024-07-15 18:46:13.938110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 
[2024-07-15 18:46:13.938486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.938966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.938986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.388 [2024-07-15 18:46:13.939226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.388 [2024-07-15 18:46:13.939482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.388 [2024-07-15 18:46:13.939497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.389 [2024-07-15 18:46:13.939764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:54.389 [2024-07-15 18:46:13.939830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.939975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.939992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.389 [2024-07-15 18:46:13.940894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.389 [2024-07-15 18:46:13.940911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:13.940926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.940956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:13.940972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.940989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:13.941010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:13.941043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:13.941075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa55c90 is same with the state(5) to be set 00:20:54.390 [2024-07-15 18:46:13.941112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.390 [2024-07-15 18:46:13.941123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.390 [2024-07-15 18:46:13.941135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:20:54.390 [2024-07-15 18:46:13.941150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941222] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa55c90 was disconnected and freed. reset controller. 
00:20:54.390 [2024-07-15 18:46:13.941250] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:54.390 [2024-07-15 18:46:13.941326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.390 [2024-07-15 18:46:13.941345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.390 [2024-07-15 18:46:13.941377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.390 [2024-07-15 18:46:13.941438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.390 [2024-07-15 18:46:13.941488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:13.941512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.390 [2024-07-15 18:46:13.944982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.390 [2024-07-15 18:46:13.945039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d9e30 (9): Bad file descriptor 00:20:54.390 [2024-07-15 18:46:13.980925] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:54.390 [2024-07-15 18:46:17.456826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.456880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.456905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.456943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.456973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.456987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.390 [2024-07-15 18:46:17.457729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.390 [2024-07-15 18:46:17.457745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.457971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.457987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67544 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 
18:46:17.458409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.391 [2024-07-15 18:46:17.458793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.391 [2024-07-15 18:46:17.458808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.458827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.458856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.458886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.458914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.458943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.392 [2024-07-15 18:46:17.458982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.458997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.392 [2024-07-15 18:46:17.459011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.392 [2024-07-15 18:46:17.459040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.392 [2024-07-15 18:46:17.459070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.459973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.459989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.460002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.460017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.392 [2024-07-15 18:46:17.460031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.460068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.392 [2024-07-15 18:46:17.460080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:20:54.392 [2024-07-15 18:46:17.460094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.392 [2024-07-15 18:46:17.460115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.392 [2024-07-15 18:46:17.460125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68048 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68056 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 
18:46:17.460260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68096 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68104 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460551] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68112 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68120 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68128 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68136 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68144 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68152 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:20:54.393 [2024-07-15 18:46:17.460878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67392 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67400 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.460961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.460975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.460985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.460997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67408 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.461011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.461024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.461034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.461045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67416 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.461058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.461072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.461082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.461092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67424 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.461105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.461119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.461129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.393 [2024-07-15 18:46:17.461139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67432 len:8 PRP1 0x0 PRP2 0x0 00:20:54.393 [2024-07-15 18:46:17.461153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.393 [2024-07-15 18:46:17.461172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.393 [2024-07-15 18:46:17.461182] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:54.393 [2024-07-15 18:46:17.461193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67440 len:8 PRP1 0x0 PRP2 0x0
00:20:54.393 [2024-07-15 18:46:17.461206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:54.393 [2024-07-15 18:46:17.461268] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa57d90 was disconnected and freed. reset controller.
00:20:54.393 [2024-07-15 18:46:17.461285] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:54.393 [2024-07-15 18:46:17.461355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:54.393 [2024-07-15 18:46:17.461372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:54.393 [2024-07-15 18:46:17.461387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:54.393 [2024-07-15 18:46:17.461413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:54.393 [2024-07-15 18:46:17.461428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:54.393 [2024-07-15 18:46:17.461441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:54.393 [2024-07-15 18:46:17.461457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:54.393 [2024-07-15 18:46:17.461471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:54.393 [2024-07-15 18:46:17.461485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:54.393 [2024-07-15 18:46:17.464675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:54.393 [2024-07-15 18:46:17.464727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d9e30 (9): Bad file descriptor
00:20:54.394 [2024-07-15 18:46:17.496883] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
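The entries above mark the first path switch of this run: bdev_nvme frees the disconnected qpair, fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and every request still queued on the old submission queue is completed with ABORTED - SQ DELETION (generic status type 00, status code 08) before the controller reset succeeds. The exact trigger is not visible in this excerpt; a plausible sketch of how a test like host/failover.sh can force it is to drop the active listener on the target so the initiator has to move to the next path. The NQN, address, and ports below are copied from the log; the rpc.py path is an assumption.

  # Hedged sketch (not taken from this log): force a failover by removing the
  # listener currently serving I/O, so bdev_nvme resets and picks the next path.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # assumed location of rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1                         # subsystem NQN from the log
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  # The initiator then fails over to 10.0.0.2:4422, aborting the queued I/O
  # with "ABORTED - SQ DELETION" as seen in the entries above.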
00:20:54.394 [2024-07-15 18:46:22.005082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.394 [2024-07-15 18:46:22.005156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.005175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.394 [2024-07-15 18:46:22.005190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.005206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.394 [2024-07-15 18:46:22.005220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.005235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.394 [2024-07-15 18:46:22.005249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.005264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d9e30 is same with the state(5) to be set 00:20:54.394 [2024-07-15 18:46:22.007893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.007938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.007980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.007996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.394 [2024-07-15 18:46:22.008443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.008971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.008989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.009021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.009053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.009085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.009127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 
[2024-07-15 18:46:22.009158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.394 [2024-07-15 18:46:22.009188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.394 [2024-07-15 18:46:22.009210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.009966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.009992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.395 [2024-07-15 18:46:22.010403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.395 [2024-07-15 18:46:22.010418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 
18:46:22.010515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.396 [2024-07-15 18:46:22.010824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:44640 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.010890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.010920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.010931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44648 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.010952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.010968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.010978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44656 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44664 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44672 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44680 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 
[2024-07-15 18:46:22.011216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44696 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44712 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44720 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44728 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44736 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44744 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44752 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44760 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44768 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.396 [2024-07-15 18:46:22.011798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.396 [2024-07-15 18:46:22.011808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.396 [2024-07-15 18:46:22.011819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:20:54.396 [2024-07-15 18:46:22.011833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.011848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.011859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.011870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.011884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.011899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.011909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.011920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.011934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.011958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.011970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.011981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:54.397 [2024-07-15 18:46:22.012174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44880 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012478] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44888 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44896 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44904 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44912 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44168 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44176 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.012942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.012967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.012978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44184 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.012992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.013007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.397 [2024-07-15 18:46:22.013018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.397 [2024-07-15 18:46:22.013029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44192 len:8 PRP1 0x0 PRP2 0x0 00:20:54.397 [2024-07-15 18:46:22.013043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.397 [2024-07-15 18:46:22.013112] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa57b80 was disconnected and freed. reset controller. 00:20:54.397 [2024-07-15 18:46:22.013130] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:54.397 [2024-07-15 18:46:22.013147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
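Editor's note: the burst of ABORTED - SQ DELETION completions above is the expected side effect of a path failover: bdev_nvme drains the queued WRITE/READ requests on the disconnected qpair, fails the path, and retries I/O after the controller reset. Below is a minimal, hedged sketch of how those events could be tallied from the captured output, assuming the run was logged to the script's try.txt file shown later in this trace; the grep patterns simply match messages visible above and are illustrative, not part of the test itself.

#!/usr/bin/env bash
# Hedged sketch: summarize failover activity from a captured bdevperf log.
# Assumes the output above was saved to try.txt (the path the failover script
# itself uses); the patterns only match messages that appear in this trace.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")
failovers=$(grep -c 'Start failover from' "$log")
resets=$(grep -c 'Resetting controller successful' "$log")
echo "aborted completions: ${aborted}, failovers: ${failovers}, successful resets: ${resets}"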
00:20:54.397 [2024-07-15 18:46:22.016538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.397 [2024-07-15 18:46:22.016601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d9e30 (9): Bad file descriptor 00:20:54.397 [2024-07-15 18:46:22.050668] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:54.397 00:20:54.397 Latency(us) 00:20:54.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.398 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:54.398 Verification LBA range: start 0x0 length 0x4000 00:20:54.398 NVMe0n1 : 15.01 9389.94 36.68 254.56 0.00 13245.00 538.33 52179.14 00:20:54.398 =================================================================================================================== 00:20:54.398 Total : 9389.94 36.68 254.56 0.00 13245.00 538.33 52179.14 00:20:54.398 Received shutdown signal, test time was about 15.000000 seconds 00:20:54.398 00:20:54.398 Latency(us) 00:20:54.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.398 =================================================================================================================== 00:20:54.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88700 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88700 /var/tmp/bdevperf.sock 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88700 ']' 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.398 18:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:54.656 18:46:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.656 18:46:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:54.656 18:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:54.914 [2024-07-15 18:46:29.324544] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:54.914 18:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:55.172 [2024-07-15 18:46:29.548845] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:55.172 18:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.429 NVMe0n1 00:20:55.429 18:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.994 00:20:55.994 18:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.250 00:20:56.250 18:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.250 18:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:56.506 18:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.506 18:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:59.855 18:46:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:59.855 18:46:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.855 18:46:34 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:59.855 18:46:34 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88843 00:20:59.855 18:46:34 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88843 00:21:01.227 0 00:21:01.227 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.227 [2024-07-15 18:46:28.120165] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:21:01.227 [2024-07-15 18:46:28.120899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88700 ] 00:21:01.227 [2024-07-15 18:46:28.262709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.227 [2024-07-15 18:46:28.367521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.227 [2024-07-15 18:46:30.936941] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:01.227 [2024-07-15 18:46:30.937071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.227 [2024-07-15 18:46:30.937094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.227 [2024-07-15 18:46:30.937113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.227 [2024-07-15 18:46:30.937129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.227 [2024-07-15 18:46:30.937144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.227 [2024-07-15 18:46:30.937160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.227 [2024-07-15 18:46:30.937176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.227 [2024-07-15 18:46:30.937191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.227 [2024-07-15 18:46:30.937206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.227 [2024-07-15 18:46:30.937249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.227 [2024-07-15 18:46:30.937276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cde30 (9): Bad file descriptor 00:21:01.227 [2024-07-15 18:46:30.947279] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:01.227 Running I/O for 1 seconds... 
00:21:01.227 00:21:01.227 Latency(us) 00:21:01.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.227 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.227 Verification LBA range: start 0x0 length 0x4000 00:21:01.227 NVMe0n1 : 1.00 9669.84 37.77 0.00 0.00 13179.97 2246.95 13356.86 00:21:01.227 =================================================================================================================== 00:21:01.227 Total : 9669.84 37.77 0.00 0.00 13179.97 2246.95 13356.86 00:21:01.227 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:01.227 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:01.227 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.485 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:01.485 18:46:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:01.743 18:46:36 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.001 18:46:36 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88700 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88700 ']' 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88700 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88700 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:05.283 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:05.283 killing process with pid 88700 00:21:05.284 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88700' 00:21:05.284 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88700 00:21:05.284 18:46:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88700 00:21:05.541 18:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:05.541 18:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:05.799 18:46:40 
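Editor's note: condensed for reference, the RPC sequence traced above is: publish two extra listeners for the subsystem, attach one controller per portal under the same bdev name, drop the active path so bdev_nvme fails over to the remaining portals, then run the verify workload. The sketch below uses only the rpc.py and bdevperf.py calls that appear in this trace (socket path, addresses and NQN copied from the run); the waitforlisten/sleep steps and error handling are omitted.

#!/usr/bin/env bash
# Hedged sketch of the traced failover exercise; mirrors the RPC calls above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on two additional portals (4420 is already listening).
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# Attach one controller per portal under the same bdev name, as in the trace,
# so bdev_nvme can fail over between them.
for port in 4420 4421 4422; do
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done

# Drop the active path; the bdev should remain usable via the other portals.
$rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
$rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0

# Kick off the verify job against the already-running bdevperf (-z) instance.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests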
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.799 rmmod nvme_tcp 00:21:05.799 rmmod nvme_fabrics 00:21:05.799 rmmod nvme_keyring 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 88341 ']' 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 88341 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88341 ']' 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88341 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.799 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88341 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:06.057 killing process with pid 88341 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88341' 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88341 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88341 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.057 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.316 18:46:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:06.316 ************************************ 00:21:06.316 END TEST nvmf_failover 00:21:06.316 ************************************ 00:21:06.316 00:21:06.316 real 0m33.051s 00:21:06.316 user 2m6.827s 00:21:06.316 sys 0m6.020s 00:21:06.316 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.316 18:46:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:06.316 18:46:40 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:21:06.316 18:46:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:06.316 18:46:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.316 18:46:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.316 18:46:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.316 ************************************ 00:21:06.316 START TEST nvmf_host_discovery 00:21:06.316 ************************************ 00:21:06.316 18:46:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:06.316 * Looking for test storage... 00:21:06.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:06.316 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.316 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:06.316 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.316 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.317 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:06.575 Cannot find device "nvmf_tgt_br" 00:21:06.575 
18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.575 Cannot find device "nvmf_tgt_br2" 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:06.575 Cannot find device "nvmf_tgt_br" 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:06.575 Cannot find device "nvmf_tgt_br2" 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:06.575 18:46:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:06.575 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:06.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:06.833 00:21:06.833 --- 10.0.0.2 ping statistics --- 00:21:06.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.833 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:06.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:06.833 00:21:06.833 --- 10.0.0.3 ping statistics --- 00:21:06.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.833 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:06.833 00:21:06.833 --- 10.0.0.1 ping statistics --- 00:21:06.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.833 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=89144 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 89144 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 89144 ']' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.833 18:46:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.833 [2024-07-15 18:46:41.212692] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:06.833 [2024-07-15 18:46:41.212803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.091 [2024-07-15 18:46:41.349676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.091 [2024-07-15 18:46:41.450769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:07.091 [2024-07-15 18:46:41.450825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.091 [2024-07-15 18:46:41.450836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.091 [2024-07-15 18:46:41.450845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.091 [2024-07-15 18:46:41.450853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.091 [2024-07-15 18:46:41.450886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 [2024-07-15 18:46:42.341766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 [2024-07-15 18:46:42.349914] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 null0 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 null1 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89194 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89194 /tmp/host.sock 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 89194 ']' 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.026 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.026 18:46:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.026 [2024-07-15 18:46:42.450900] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:08.026 [2024-07-15 18:46:42.451079] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89194 ] 00:21:08.295 [2024-07-15 18:46:42.609794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.295 [2024-07-15 18:46:42.770834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:09.230 18:46:43 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.230 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 [2024-07-15 18:46:43.846233] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:09.758 18:46:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:09.758 18:46:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:10.326 [2024-07-15 18:46:44.504889] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:10.326 [2024-07-15 18:46:44.504955] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:10.326 [2024-07-15 18:46:44.504973] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.326 [2024-07-15 18:46:44.593093] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:10.326 [2024-07-15 18:46:44.658057] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:10.326 [2024-07-15 18:46:44.658100] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:10.892 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
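For readability, a minimal sketch of the polling helper whose xtrace (autotest_common.sh @912-@918) recurs throughout this output; the condition string, the max=10 budget and the 1-second sleep are taken from the trace, while the failure handling is an assumption:

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 1                    # otherwise retry once per second
        done
        return 1                       # assumed: give up after ~10 seconds
    }

In the trace above, the first evaluation fails (no controller attached yet), the helper sleeps, the discovery poller attaches nvme0 in the meantime, and the next evaluation returns 0.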
00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:10.893 
18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.893 [2024-07-15 18:46:45.355326] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:10.893 [2024-07-15 18:46:45.355645] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:10.893 [2024-07-15 18:46:45.355683] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
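The step being verified here adds a second TCP listener on port 4421 to the same subsystem, after which the host side should report two paths for nvme0 once the discovery log page is re-read. Condensed into direct rpc.py calls (a sketch; the test issues the same RPCs through its rpc_cmd wrapper, and the addresses/ports mirror the log):

    # target side: expose nqn.2016-06.io.spdk:cnode0 on a second portal
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421
    # host side: both service IDs should eventually be listed for the controller
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected once the new path is attached: 4420 4421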
00:21:10.893 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.152 [2024-07-15 18:46:45.441683] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.152 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.153 [2024-07-15 18:46:45.500024] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:11.153 [2024-07-15 18:46:45.500054] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:11.153 [2024-07-15 18:46:45.500063] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:11.153 18:46:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.089 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.348 18:46:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.348 [2024-07-15 18:46:46.640741] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:12.348 [2024-07-15 18:46:46.640786] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.348 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:12.348 [2024-07-15 18:46:46.646979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.348 [2024-07-15 18:46:46.647013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.348 [2024-07-15 18:46:46.647026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.348 [2024-07-15 18:46:46.647036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.348 [2024-07-15 18:46:46.647046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.348 [2024-07-15 18:46:46.647055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.348 [2024-07-15 18:46:46.647065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.349 [2024-07-15 18:46:46.647074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:12.349 [2024-07-15 18:46:46.647083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:12.349 [2024-07-15 18:46:46.656921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.666943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.667119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.667143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.667156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.667174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.667189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.667198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.667213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.667231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
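The connect()/reset errors above, which continue for a few more entries below, follow directly from the nvmf_subsystem_remove_listener call at discovery.sh@127: the 4420 listener is removed out from under a live connection, so reconnect attempts fail with errno 111 (ECONNREFUSED) until the next discovery log page drops that path and only 4421 remains. Roughly (a sketch under the same assumptions as the earlier rpc.py example):

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # bdev_nvme keeps retrying the dead 4420 connection until the discovery
    # poller reports it "not found"; the namespaces stay reachable via 4421
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'
    # expected after the path is dropped: 4421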
00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.349 [2024-07-15 18:46:46.677000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.677103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.677119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.677129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.677143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.677156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.677165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.677174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.677187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.349 [2024-07-15 18:46:46.687066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.687149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.687167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.687177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.687191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.687204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.687213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.687223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.687235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
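The checks threaded through this retry noise keep re-reading three small list helpers from host/discovery.sh; their pipelines appear verbatim in the xtrace (@55, @59, @63), and the function bodies below are reconstructed around them as an approximation:

    get_subsystem_names() {   # discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # discovery.sh@63, $1 is the controller name, e.g. nvme0
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }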
00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.349 [2024-07-15 18:46:46.697117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.697193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.697208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.697218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.697231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.697245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.697253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.697262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.697274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:12.349 [2024-07-15 18:46:46.707208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.707303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.707321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.707332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.707346] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.707360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.707369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.707380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.707393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.349 [2024-07-15 18:46:46.717259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:12.349 [2024-07-15 18:46:46.717329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.349 [2024-07-15 18:46:46.717344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f6c50 with addr=10.0.0.2, port=4420 00:21:12.349 [2024-07-15 18:46:46.717354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6c50 is same with the state(5) to be set 00:21:12.349 [2024-07-15 18:46:46.717367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6c50 (9): Bad file descriptor 00:21:12.349 [2024-07-15 18:46:46.717380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:12.349 [2024-07-15 18:46:46.717389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:12.349 [2024-07-15 18:46:46.717399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:12.349 [2024-07-15 18:46:46.717417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
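The is_notification_count_eq checks that resume below rest on a running notify_id cursor; approximately (the @74/@75 assignments are taken from the xtrace, and the increment is inferred from notify_id moving 0 -> 1 -> 2 -> 4 across this test):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    is_notification_count_eq() {   # discovery.sh@79-@80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }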
00:21:12.349 [2024-07-15 18:46:46.726240] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:12.349 [2024-07-15 18:46:46.726283] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.349 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:12.608 
18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.608 18:46:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.608 18:46:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 [2024-07-15 18:46:48.025561] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:13.986 [2024-07-15 18:46:48.025612] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:13.986 [2024-07-15 18:46:48.025630] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:13.986 [2024-07-15 18:46:48.111699] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:13.986 [2024-07-15 18:46:48.172475] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:13.986 [2024-07-15 18:46:48.172550] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 2024/07/15 18:46:48 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:13.986 request: 00:21:13.986 { 00:21:13.986 "method": "bdev_nvme_start_discovery", 00:21:13.986 "params": { 00:21:13.986 "name": "nvme", 00:21:13.986 "trtype": "tcp", 00:21:13.986 "traddr": "10.0.0.2", 00:21:13.986 "adrfam": "ipv4", 00:21:13.986 "trsvcid": "8009", 00:21:13.986 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:13.986 "wait_for_attach": true 00:21:13.986 } 00:21:13.986 } 00:21:13.986 Got JSON-RPC error response 00:21:13.986 GoRPCClient: error on JSON-RPC call 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 2024/07/15 18:46:48 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:13.986 request: 00:21:13.986 { 00:21:13.986 "method": "bdev_nvme_start_discovery", 00:21:13.986 "params": { 00:21:13.986 "name": "nvme_second", 00:21:13.986 "trtype": "tcp", 00:21:13.986 "traddr": "10.0.0.2", 00:21:13.986 "adrfam": "ipv4", 00:21:13.986 "trsvcid": "8009", 00:21:13.986 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:13.986 "wait_for_attach": true 00:21:13.986 } 00:21:13.986 } 00:21:13.986 Got JSON-RPC error response 00:21:13.986 GoRPCClient: error on JSON-RPC call 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.986 18:46:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.358 [2024-07-15 18:46:49.433159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.358 [2024-07-15 18:46:49.433240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3540 with addr=10.0.0.2, port=8010 00:21:15.358 [2024-07-15 18:46:49.433271] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:15.358 [2024-07-15 18:46:49.433282] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:15.358 [2024-07-15 18:46:49.433294] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:16.293 [2024-07-15 18:46:50.433172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.293 [2024-07-15 18:46:50.433254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3540 with addr=10.0.0.2, port=8010 00:21:16.293 [2024-07-15 18:46:50.433285] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:16.293 [2024-07-15 18:46:50.433296] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:16.293 [2024-07-15 18:46:50.433307] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:17.227 [2024-07-15 18:46:51.432970] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:17.227 2024/07/15 18:46:51 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second 
traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:17.227 request: 00:21:17.227 { 00:21:17.227 "method": "bdev_nvme_start_discovery", 00:21:17.227 "params": { 00:21:17.227 "name": "nvme_second", 00:21:17.227 "trtype": "tcp", 00:21:17.227 "traddr": "10.0.0.2", 00:21:17.227 "adrfam": "ipv4", 00:21:17.227 "trsvcid": "8010", 00:21:17.227 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:17.227 "wait_for_attach": false, 00:21:17.227 "attach_timeout_ms": 3000 00:21:17.227 } 00:21:17.227 } 00:21:17.227 Got JSON-RPC error response 00:21:17.227 GoRPCClient: error on JSON-RPC call 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89194 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.227 rmmod nvme_tcp 00:21:17.227 rmmod nvme_fabrics 00:21:17.227 rmmod nvme_keyring 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 89144 ']' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 89144 00:21:17.227 18:46:51 
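The two failures above are deliberate negative-path checks in the discovery test: a second bdev_nvme_start_discovery against the already-watched 10.0.0.2:8009 returns -17 (File exists), and a discovery attempt against 10.0.0.2:8010, where nothing listens, exhausts its 3000 ms attach timeout and returns -110. A rough standalone sketch of those two calls, assuming rpc_cmd ultimately forwards to scripts/rpc.py against the /tmp/host.sock socket shown in the log:

  # Sketch only: replays the two negative-path discovery calls seen above.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/tmp/host.sock

  # Duplicate discovery on 8009 -- expected to fail with -17 (File exists)
  "$rpc_py" -s "$host_sock" bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w || true

  # Discovery against 8010 with a 3 s attach timeout -- expected to fail with -110
  "$rpc_py" -s "$host_sock" bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true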
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 89144 ']' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 89144 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89144 00:21:17.227 killing process with pid 89144 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89144' 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 89144 00:21:17.227 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 89144 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:17.486 00:21:17.486 real 0m11.285s 00:21:17.486 user 0m21.528s 00:21:17.486 sys 0m2.307s 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.486 18:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.486 ************************************ 00:21:17.486 END TEST nvmf_host_discovery 00:21:17.486 ************************************ 00:21:17.486 18:46:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:17.486 18:46:51 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:17.486 18:46:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:17.486 18:46:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.486 18:46:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:17.745 ************************************ 00:21:17.745 START TEST nvmf_host_multipath_status 00:21:17.745 ************************************ 00:21:17.745 18:46:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:17.745 * Looking for test storage... 
00:21:17.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:17.745 Cannot find device "nvmf_tgt_br" 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:21:17.745 Cannot find device "nvmf_tgt_br2" 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:17.745 Cannot find device "nvmf_tgt_br" 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:17.745 Cannot find device "nvmf_tgt_br2" 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:17.745 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:18.004 18:46:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:18.004 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:18.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:21:18.263 00:21:18.263 --- 10.0.0.2 ping statistics --- 00:21:18.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.263 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:18.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:18.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:18.263 00:21:18.263 --- 10.0.0.3 ping statistics --- 00:21:18.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.263 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:18.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:21:18.263 00:21:18.263 --- 10.0.0.1 ping statistics --- 00:21:18.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.263 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89678 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89678 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89678 ']' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.263 18:46:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:18.263 [2024-07-15 18:46:52.607591] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
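The nvmf_veth_init block above builds the test topology: a network namespace nvmf_tgt_ns_spdk holding the target interfaces (10.0.0.2 and 10.0.0.3), a veth pair left in the root namespace for the initiator (10.0.0.1), all joined over the nvmf_br bridge, with iptables rules admitting TCP/4420 and bridge forwarding, verified by the three pings. A condensed sketch of those steps; it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) and a few of the link-up commands shown in the log:

  # Sketch condensed from the ip/iptables commands above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # root ns -> target address must answer before the test proceeds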
00:21:18.263 [2024-07-15 18:46:52.607724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.522 [2024-07-15 18:46:52.757160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:18.522 [2024-07-15 18:46:52.947100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.522 [2024-07-15 18:46:52.947187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.522 [2024-07-15 18:46:52.947203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.522 [2024-07-15 18:46:52.947216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.522 [2024-07-15 18:46:52.947228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.522 [2024-07-15 18:46:52.947487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.522 [2024-07-15 18:46:52.947497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.087 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.087 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:19.087 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.087 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.087 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:19.345 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.345 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89678 00:21:19.345 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:19.345 [2024-07-15 18:46:53.811486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.602 18:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:19.602 Malloc0 00:21:19.860 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:19.860 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:20.119 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.377 [2024-07-15 18:46:54.729291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.377 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:21:20.634 [2024-07-15 18:46:54.973721] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89783 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89783 /var/tmp/bdevperf.sock 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89783 ']' 00:21:20.634 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.635 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.635 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.635 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.635 18:46:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.007 18:46:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.007 18:46:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:22.007 18:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:22.007 18:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:22.264 Nvme0n1 00:21:22.264 18:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:22.830 Nvme0n1 00:21:22.830 18:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:22.830 18:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.752 18:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:24.752 18:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:25.010 18:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
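Taken together, the setup above amounts to: provision the target with one Malloc namespace exposed on two TCP listeners (4420 and 4421), then have bdevperf attach the same subsystem over both, the second attach with -x multipath so both paths land under a single Nvme0n1 bdev. A sketch of the essential RPCs; the target-side calls go to the default RPC socket (no -s argument, as in the log) and the host-side calls to the bdevperf socket:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side (nvmf_tgt): one Malloc namespace, two TCP listeners
  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
  "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side (bdevperf): attach the same subsystem over both listeners,
  # the second path with -x multipath so both end up under Nvme0n1
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10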
-n optimized 00:21:25.266 18:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:26.197 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:26.197 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:26.197 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.197 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:26.454 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.455 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:26.455 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.455 18:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:26.712 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:26.712 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:26.712 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.712 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:26.969 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.969 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:26.969 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.969 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:27.225 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.225 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:27.225 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.225 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:27.481 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.481 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:27.481 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
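Each scenario above starts by flipping the ANA state of the two listeners on the target and then sleeping one second so the host-side driver can observe the change before check_status runs. A sketch of one such transition, mirroring the set_ANA_state helper seen in the log:

  # One set_ANA_state transition (4420 -> non_optimized, 4421 -> optimized)
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1   # give the initiator time to see the new states before checking paths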
(.transport.trsvcid=="4421").accessible' 00:21:27.481 18:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.755 18:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.755 18:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:27.755 18:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:28.012 18:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.269 18:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:29.195 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:29.195 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:29.195 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.196 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:29.452 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:29.452 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:29.452 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.452 18:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:29.709 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.709 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.709 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.709 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:29.966 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.966 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:29.966 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.966 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:30.222 18:47:04 
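check_status above boils down to six jq probes against bdev_nvme_get_io_paths on the bdevperf RPC socket: for each listener it reads the current, connected and accessible flags of the matching io_path and compares them with the expected values. One probe, lifted from the pattern repeated in the log:

  # Is the 4420 path the currently selected one? Prints "true" or "false".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'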
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.222 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:30.222 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:30.222 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.479 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.479 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:30.479 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.479 18:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:30.735 18:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.735 18:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:30.735 18:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:30.992 18:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:31.249 18:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:32.618 18:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.895 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.895 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:32.895 18:47:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:32.895 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.159 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.159 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:33.159 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.159 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:33.416 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.416 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:33.416 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.416 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:33.674 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.674 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:33.674 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.674 18:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:33.931 18:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.931 18:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:33.931 18:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:34.188 18:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:34.445 18:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:35.857 18:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:35.857 18:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:35.857 18:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.857 18:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:35.857 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.857 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:35.857 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.857 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:36.114 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:36.114 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:36.114 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.114 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:36.371 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.371 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:36.371 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.371 18:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:36.628 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.628 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:36.628 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.628 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:36.884 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.884 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:36.884 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.884 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:37.140 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:37.140 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:37.140 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:37.396 18:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:37.654 18:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:38.593 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:38.593 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:38.593 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.593 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:38.850 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:38.850 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:38.850 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.850 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:39.107 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.107 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:39.107 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.107 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:39.364 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.364 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:39.364 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:39.364 18:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.621 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.621 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:39.621 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.621 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:39.878 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.878 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:39.878 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.878 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:40.134 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:40.134 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:40.134 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:40.390 18:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:40.953 18:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:41.934 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:41.934 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:41.934 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.934 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.215 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.215 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:42.215 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.215 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:42.472 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.472 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:42.472 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.472 18:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:42.729 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.729 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:42.729 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:42.729 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.986 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.986 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:42.986 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.986 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:43.243 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:43.243 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:43.243 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:43.243 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.501 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.501 18:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:43.757 18:47:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:43.757 18:47:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:44.331 18:47:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:44.331 18:47:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:45.282 18:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:45.282 18:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:45.282 18:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.282 18:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:45.856 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.857 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:45.857 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.857 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:46.113 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.113 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:46.113 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.113 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:46.369 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.369 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:46.369 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.369 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:46.624 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.624 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:46.624 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.624 18:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:46.880 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.880 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:46.880 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:46.880 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.136 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.136 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:47.136 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.392 18:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:47.953 18:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:48.882 
18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:48.882 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:48.882 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.882 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:49.138 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:49.138 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:49.138 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.138 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:49.395 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.395 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:49.395 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.395 18:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:49.710 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.710 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:49.710 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.710 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:49.975 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.975 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:49.975 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.975 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:50.233 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:50.233 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:50.233 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:50.233 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.490 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:50.490 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:50.490 18:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:51.053 18:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:51.338 18:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:52.273 18:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:52.273 18:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:52.273 18:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.273 18:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:52.838 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.838 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:52.838 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.838 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:53.095 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.095 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:53.095 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.095 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:53.352 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.352 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:53.352 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.352 18:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:53.609 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.609 18:47:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:53.609 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.609 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:54.183 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.183 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:54.183 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:54.183 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.747 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.747 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:54.747 18:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:55.003 18:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:55.259 18:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:56.285 18:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:56.285 18:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.285 18:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.285 18:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.847 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.847 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.847 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.847 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:57.105 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:57.105 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:57.105 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:21:57.105 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.363 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.363 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:57.363 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.363 18:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.620 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.620 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.620 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.620 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:58.184 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.184 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:58.184 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.184 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89783 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89783 ']' 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89783 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89783 00:21:58.442 killing process with pid 89783 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89783' 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89783 00:21:58.442 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89783 00:21:58.442 Connection closed with partial response: 00:21:58.442 
00:21:58.442 00:21:58.720 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89783 00:21:58.720 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:58.720 [2024-07-15 18:46:55.042254] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:58.720 [2024-07-15 18:46:55.042366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89783 ] 00:21:58.720 [2024-07-15 18:46:55.177058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.720 [2024-07-15 18:46:55.282466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.720 Running I/O for 90 seconds... 00:21:58.720 [2024-07-15 18:47:11.774035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.720 [2024-07-15 18:47:11.774157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.774869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.774890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
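The check_status trace earlier in this run repeats one pattern per port: query the bdevperf RPC socket with bdev_nvme_get_io_paths and pull a single field out with jq. A minimal sketch of that per-port check, reconstructed from the commands shown in the trace (the function signature is taken from the traced calls such as "port_status 4420 current false"; the body here is an illustration, not the script verbatim):

  # Sketch of the port_status check seen in the trace above (illustrative).
  # $1 = listener port (4420/4421), $2 = io_path field (current/connected/accessible), $3 = expected value
  port_status() {
      local port=$1 field=$2 expected=$3
      local actual
      actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }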
00:21:58.720 [2024-07-15 18:47:11.775257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.720 [2024-07-15 18:47:11.775643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.720 [2024-07-15 18:47:11.775661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.775961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.775985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
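The inaccessible completions recorded here are the host-side effect of the ANA transitions driven earlier through multipath_status.sh@59/@60. A sketch of that set_ANA_state step, assembled from the two rpc.py calls shown in the trace (subsystem NQN, address and ports are the ones traced; the wrapper form is illustrative):

  # Sketch of the set_ANA_state step traced at multipath_status.sh@59/@60 (illustrative).
  # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
  set_ANA_state() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }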
00:21:58.721 [2024-07-15 18:47:11.776606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.776957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.776977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91792 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.721 [2024-07-15 18:47:11.777550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.721 [2024-07-15 18:47:11.777577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.777976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.777998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
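The WRITE completions above all carry the ASYMMETRIC ACCESS INACCESSIBLE (03/02) path status, i.e. I/O that was submitted while the listener for that path had been set to the inaccessible ANA state. One way to gauge how many such completions bdevperf logged is to count them in the captured output cat'd at multipath_status.sh@141; this is a hypothetical post-processing step, not part of the test itself:

  # Count ANA-inaccessible completions in the captured bdevperf log (illustrative only).
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt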
00:21:58.722 [2024-07-15 18:47:11.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.778046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.778074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.778093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.778120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.778140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.778167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.778199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.778225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.778245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.722 [2024-07-15 18:47:11.779859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.779904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.779963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.779990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:58.722 [2024-07-15 18:47:11.780280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.722 [2024-07-15 18:47:11.780418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.722 [2024-07-15 18:47:11.780445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.780896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.780990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.723 [2024-07-15 18:47:11.781328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:21:58.723 [2024-07-15 18:47:11.781636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.781972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.781997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.782015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.782765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.782798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.782828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.782861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.782887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.782906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.782931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.782992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.783010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.783035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.783053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.723 [2024-07-15 18:47:11.783078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.723 [2024-07-15 18:47:11.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.724 [2024-07-15 18:47:11.783714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.783931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.783982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.724 [2024-07-15 18:47:11.784457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.724 [2024-07-15 18:47:11.784475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.784962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.784983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.785008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.799982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:21:58.725 [2024-07-15 18:47:11.800122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.800837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.800859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.802936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.802974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.803004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.725 [2024-07-15 18:47:11.803025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.725 [2024-07-15 18:47:11.803076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.725 [2024-07-15 18:47:11.803106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.803127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.803976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.803998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.726 [2024-07-15 18:47:11.804360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.804410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.804462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.804513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.804563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.726 [2024-07-15 18:47:11.804592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.726 [2024-07-15 18:47:11.804613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:21:58.726 [2024-07-15 18:47:11.804642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.726 [2024-07-15 18:47:11.804663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs, 2024-07-15 18:47:11.804 through 18:47:11.833 (log timestamps 00:21:58.726-00:21:58.732): WRITE commands on sqid:1 (lba:91352-92176, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands on sqid:1 (lba:91160-91344, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:21:58.732 [2024-07-15 18:47:11.833075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.732 [2024-07-15 18:47:11.833097] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.833128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.833150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.833181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.833203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 18:47:11.834546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.732 [2024-07-15 
18:47:11.834581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.732 [2024-07-15 18:47:11.834601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92104 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.834974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835282] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 
18:47:11.835680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.733 [2024-07-15 18:47:11.835849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.835891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.733 [2024-07-15 18:47:11.835927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.733 [2024-07-15 18:47:11.835949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.835973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.835996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 
cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.734 [2024-07-15 18:47:11.836194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.836628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.836642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.837977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.734 [2024-07-15 18:47:11.837999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.734 [2024-07-15 18:47:11.838015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:21:58.735 [2024-07-15 18:47:11.838951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.838974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.838993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.839010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.854937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.854991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.735 [2024-07-15 18:47:11.855362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.735 [2024-07-15 18:47:11.855383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.855413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.855437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.855468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.855503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.855533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.855555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.855586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.855608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.736 [2024-07-15 18:47:11.856666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.856975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.856998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.736 [2024-07-15 18:47:11.857354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.857974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.857997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.736 [2024-07-15 18:47:11.858362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.736 [2024-07-15 18:47:11.858400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:21:58.737 [2024-07-15 18:47:11.858518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.858774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.858833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.858892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.858929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.858964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:11.859343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.859929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.859987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.860011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.860048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.860070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.860109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.860131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:11.860385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:11.860412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.621874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.621960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.621996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.622012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.622868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.622897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.622923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.737 [2024-07-15 18:47:29.622940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.622979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.622995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.623034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.623098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:29.623136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:29.623173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.737 [2024-07-15 18:47:29.623210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.737 [2024-07-15 18:47:29.623232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.737 [2024-07-15 18:47:29.623248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.623287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.623400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.623644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.623685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.623724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.623964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.623980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.624094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.624131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.624169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.624206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:21:58.738 [2024-07-15 18:47:29.624350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.624440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.624463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.624478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.626664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.626707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.738 [2024-07-15 18:47:29.626746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.626783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.626823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.626860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.626917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.626970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.738 [2024-07-15 18:47:29.626993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.738 [2024-07-15 18:47:29.627008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.627046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.627083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.627159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.627970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.627986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.739 [2024-07-15 18:47:29.628099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.628524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.628546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.739 [2024-07-15 18:47:29.628562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.739 [2024-07-15 18:47:29.629275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.739 [2024-07-15 18:47:29.629296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.629479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.629516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.629651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.629667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:21:58.740 [2024-07-15 18:47:29.629702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.629726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.630788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.630845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.630883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.630921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.630976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.630998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.631598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.631696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.631712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.634065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.634109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.740 [2024-07-15 18:47:29.634149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.634187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.740 [2024-07-15 18:47:29.634225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.740 [2024-07-15 18:47:29.634265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.740 [2024-07-15 18:47:29.634287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.634705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.634743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.634781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.634858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.634969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.634985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.635145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.635737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.635779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.635817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.635917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:21:58.741 [2024-07-15 18:47:29.635971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.635988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.741 [2024-07-15 18:47:29.636420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.636839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.741 [2024-07-15 18:47:29.636879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.741 [2024-07-15 18:47:29.636901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.636917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.636965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.636982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.637059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.637364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.742 [2024-07-15 18:47:29.637537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.637979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.637997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.638074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.742 [2024-07-15 18:47:29.638394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.638963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.742 [2024-07-15 18:47:29.638989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.742 [2024-07-15 18:47:29.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.639043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.639082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.639121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.639161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.639200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.639239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.639289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.639313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.639329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.641777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.641820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.641861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.641901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.641939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.641971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.641987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:21:58.743 [2024-07-15 18:47:29.642010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.642026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.642423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.743 [2024-07-15 18:47:29.642461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.642498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.642521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.642537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.644870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.644909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.644939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.644970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.644993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.743 [2024-07-15 18:47:29.645513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.743 [2024-07-15 18:47:29.645536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.744 [2024-07-15 18:47:29.645640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.645960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.645984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.646000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.646039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.646077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.646277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.646315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.646977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.646993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.647032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.647080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.647119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.744 [2024-07-15 18:47:29.647156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.647195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.744 [2024-07-15 18:47:29.647217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.744 [2024-07-15 18:47:29.647232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:21:58.744 - 00:21:58.749 [2024-07-15 18:47:29.647254 - 18:47:29.666160] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: [repeated command/completion pairs elided: READ and WRITE commands on sqid:1 nsid:1 len:8 (SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; individual cid, lba and sqhd values omitted]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 18:47:29.666198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 18:47:29.666237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.749 [2024-07-15 18:47:29.666274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 18:47:29.666314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 18:47:29.666353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 18:47:29.666390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.749 [2024-07-15 18:47:29.666413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.749 [2024-07-15 18:47:29.666429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.666466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
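
Each failure in this stretch is reported as a pair of lines: the command print from nvme_io_qpair_print_command (opcode, sqid, cid, nsid, lba, len) followed by the matching completion from spdk_nvme_print_completion (status name, qid, cid, sqhd and the p/m/dnr bits). For post-processing a capture like this, here is a rough regex sketch that pulls the command fields out; the field names simply mirror what is printed here and nothing more is assumed about the format.

    import re

    # Matches the command half of the record pairs above, e.g.
    #   "*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73704 len:8"
    CMD_RE = re.compile(
        r"\*NOTICE\*: (?P<op>READ|WRITE) "
        r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
        r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
    )

    def parse_commands(log_text: str):
        """Yield one dict per printed I/O command, with numeric fields as ints."""
        for m in CMD_RE.finditer(log_text):
            rec = m.groupdict()
            yield {k: (rec[k] if k == "op" else int(rec[k])) for k in rec}

    # Example against one record copied from the log above:
    sample = "*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0"
    print(next(parse_commands(sample)))
    # -> {'op': 'WRITE', 'sqid': 1, 'cid': 108, 'nsid': 1, 'lba': 73704, 'len': 8}
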
00:21:58.750 [2024-07-15 18:47:29.666548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.666586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.666624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.666661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.666683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.666699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.668958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.668982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.668998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.669150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:21:58.750 [2024-07-15 18:47:29.669209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.669263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.669323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.669338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.670984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.671013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.671040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.671057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.671089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.750 [2024-07-15 18:47:29.671106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.671129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.671145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.671167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.750 [2024-07-15 18:47:29.671184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.750 [2024-07-15 18:47:29.671206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.671974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.671990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:58.751 [2024-07-15 18:47:29.672499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.672812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.672975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.672998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.673020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.673042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.751 [2024-07-15 18:47:29.673058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.673080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.673096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.673118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.751 [2024-07-15 18:47:29.673136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.751 [2024-07-15 18:47:29.673158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.673174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.673196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.673212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.673234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
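
Note that every completion above carries m:0 dnr:0, i.e. the Do Not Retry bit is clear; combined with the path-related status type, that marks these failures as candidates for retry on another path rather than hard errors. The sketch below encodes that reading of the printed bits — it is an interpretation of the NVMe status semantics for illustration, not a statement of SPDK's actual multipath retry policy.

    def retryable_on_other_path(sct: int, dnr: int) -> bool:
        """Path-related status (SCT 0x3) with the DNR bit clear is worth retrying elsewhere.

        Mirrors the bits printed above, e.g. "(03/02) ... dnr:0" -> True.
        """
        return sct == 0x3 and dnr == 0

    # All completions in this log print the pair "(03/02)" with dnr:0:
    print(retryable_on_other_path(0x3, 0))   # -> True
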
00:21:58.752 [2024-07-15 18:47:29.675709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.675841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.675968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.675992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.676008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.676357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.676395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.676433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.676499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.676516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.678126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.752 [2024-07-15 18:47:29.678186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.752 [2024-07-15 18:47:29.678418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.752 [2024-07-15 18:47:29.678440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
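
At this density the stream is easier to digest as counts than as individual records. Here is a small, purely illustrative aggregation over the same printed format, ahead of the test's own latency summary further down; it is not tied to SPDK, it just tallies the READ/WRITE command prints and the distinct LBAs they touched.

    import re
    from collections import Counter

    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

    def summarize(log_text: str) -> None:
        """Print per-opcode command counts and how many distinct LBAs were hit."""
        ops = Counter()
        lbas = set()
        for op, lba in CMD_RE.findall(log_text):
            ops[op] += 1
            lbas.add(int(lba))
        print(dict(ops), "distinct LBAs:", len(lbas))

    # Example with two records lifted from the log above:
    summarize("*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74384 len:8 "
              "*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73248 len:8")
    # -> {'WRITE': 1, 'READ': 1} distinct LBAs: 2
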
00:21:58.753 [2024-07-15 18:47:29.678456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.678757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.678772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.680653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.680680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.680696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.680718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.680734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.680755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.753 [2024-07-15 18:47:29.680771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.753 [2024-07-15 18:47:29.680793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.753 [2024-07-15 18:47:29.680809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.753 Received shutdown signal, test time was about 35.542903 seconds 00:21:58.753 00:21:58.753 Latency(us) 00:21:58.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.753 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.753 Verification LBA range: start 0x0 length 0x4000 00:21:58.753 Nvme0n1 : 35.54 8938.58 34.92 0.00 0.00 14293.72 161.89 4122401.65 00:21:58.753 =================================================================================================================== 00:21:58.753 Total : 8938.58 34.92 0.00 0.00 14293.72 161.89 4122401.65 00:21:58.753 18:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.010 rmmod nvme_tcp 00:21:59.010 rmmod nvme_fabrics 00:21:59.010 rmmod nvme_keyring 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:21:59.010 18:47:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89678 ']' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89678 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89678 ']' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89678 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89678 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.010 killing process with pid 89678 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89678' 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89678 00:21:59.010 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89678 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.268 00:21:59.268 real 0m41.634s 00:21:59.268 user 2m13.344s 00:21:59.268 sys 0m13.178s 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.268 18:47:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.268 ************************************ 00:21:59.268 END TEST nvmf_host_multipath_status 00:21:59.268 ************************************ 00:21:59.268 18:47:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.268 18:47:33 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:59.268 18:47:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.268 18:47:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.268 18:47:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.268 ************************************ 00:21:59.268 START TEST nvmf_discovery_remove_ifc 00:21:59.268 ************************************ 00:21:59.268 18:47:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:59.268 * Looking for test storage... 00:21:59.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.268 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:59.527 Cannot find device "nvmf_tgt_br" 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:21:59.527 Cannot find device "nvmf_tgt_br2" 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:59.527 Cannot find device "nvmf_tgt_br" 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:59.527 Cannot find device "nvmf_tgt_br2" 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:59.527 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.528 18:47:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:59.528 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.785 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:59.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:21:59.786 00:21:59.786 --- 10.0.0.2 ping statistics --- 00:21:59.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.786 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:59.786 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.786 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:59.786 00:21:59.786 --- 10.0.0.3 ping statistics --- 00:21:59.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.786 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:21:59.786 00:21:59.786 --- 10.0.0.1 ping statistics --- 00:21:59.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.786 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=91099 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 91099 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91099 ']' 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.786 18:47:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.786 [2024-07-15 18:47:34.230146] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
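For reference, the namespace topology that nvmf_veth_init builds in the trace above (and that the three pings just verified) can be reproduced on its own with roughly the sequence below. It is a condensed sketch, not the verbatim helper: interface, bridge and namespace names are the ones from the log, and it assumes a root shell with iproute2 and iptables available.

  # One namespace for the target, three veth pairs: nvmf_init_if stays with the
  # initiator in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 move into
  # nvmf_tgt_ns_spdk and become the 10.0.0.2 / 10.0.0.3 listeners.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Tie the bridge-side veth ends together so 10.0.0.1 can reach both listeners.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Let NVMe/TCP (port 4420) in, allow hairpin forwarding on the bridge, then
  # sanity-check connectivity the same way the harness does.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1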
00:21:59.786 [2024-07-15 18:47:34.230309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.043 [2024-07-15 18:47:34.381104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.043 [2024-07-15 18:47:34.501014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.043 [2024-07-15 18:47:34.501066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.043 [2024-07-15 18:47:34.501077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.043 [2024-07-15 18:47:34.501087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.043 [2024-07-15 18:47:34.501095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.043 [2024-07-15 18:47:34.501131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.977 [2024-07-15 18:47:35.218984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.977 [2024-07-15 18:47:35.227111] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:00.977 null0 00:22:00.977 [2024-07-15 18:47:35.259096] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91151 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91151 /tmp/host.sock 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91151 ']' 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:00.977 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.977 18:47:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.977 [2024-07-15 18:47:35.330886] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:00.977 [2024-07-15 18:47:35.330983] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91151 ] 00:22:01.234 [2024-07-15 18:47:35.471753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.235 [2024-07-15 18:47:35.580919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.800 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.057 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.057 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:02.057 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.057 18:47:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.993 [2024-07-15 18:47:37.361956] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:02.993 [2024-07-15 18:47:37.362001] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:02.993 [2024-07-15 18:47:37.362020] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:02.993 [2024-07-15 18:47:37.449133] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 
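Everything on the host side of this test is driven over the second app's /tmp/host.sock RPC socket. Below is a minimal sketch of that sequence using scripts/rpc.py directly, with the same arguments visible in the trace (rpc_cmd in the script is a thin wrapper around the same client); it assumes the target side already exposes the discovery listener on 10.0.0.2:8009 and the 4420 data listener, as shown above.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

  # The host app was started with --wait-for-rpc, so bdev_nvme options have to be
  # set before framework init; "-e 1" is passed through exactly as in the trace.
  $RPC bdev_nvme_set_options -e 1
  $RPC framework_start_init

  # Attach through the discovery service and block until the controller is up;
  # the short timeouts below are what make the later path-flap recovery quick.
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach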
00:22:03.257 [2024-07-15 18:47:37.506430] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:03.257 [2024-07-15 18:47:37.506523] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:03.257 [2024-07-15 18:47:37.506548] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:03.257 [2024-07-15 18:47:37.506568] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:03.257 [2024-07-15 18:47:37.506594] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:03.257 [2024-07-15 18:47:37.510664] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1528650 was disconnected and freed. delete nvme_qpair. 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.257 18:47:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.257 18:47:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.190 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.447 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.447 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.447 18:47:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.398 18:47:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:06.329 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.588 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.588 18:47:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.521 18:47:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.521 18:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.453 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.453 [2024-07-15 18:47:42.935167] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:08.453 [2024-07-15 18:47:42.935238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.711 [2024-07-15 18:47:42.935255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.711 [2024-07-15 18:47:42.935269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.711 [2024-07-15 18:47:42.935280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.711 [2024-07-15 18:47:42.935290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.711 [2024-07-15 18:47:42.935301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.711 [2024-07-15 18:47:42.935312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.711 [2024-07-15 18:47:42.935322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.711 [2024-07-15 18:47:42.935333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.711 [2024-07-15 18:47:42.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.711 [2024-07-15 18:47:42.935353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1900 is same with the state(5) to be set 00:22:08.711 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.711 18:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.711 [2024-07-15 18:47:42.945166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f1900 (9): Bad file descriptor 00:22:08.711 [2024-07-15 18:47:42.955185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.643 [2024-07-15 18:47:43.975033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:09.643 [2024-07-15 18:47:43.975186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f1900 with addr=10.0.0.2, port=4420 00:22:09.643 [2024-07-15 18:47:43.975231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1900 is same with the state(5) to be set 00:22:09.643 [2024-07-15 18:47:43.975322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f1900 (9): Bad file descriptor 00:22:09.643 [2024-07-15 18:47:43.976383] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:09.643 [2024-07-15 18:47:43.976468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.643 [2024-07-15 18:47:43.976499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.643 [2024-07-15 18:47:43.976530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.643 [2024-07-15 18:47:43.976603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
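The recurring bdev_get_bdevs | jq | sort | xargs lines followed by sleep 1 above are the test's wait_for_bdev polling: it re-reads the host's bdev list once a second until it equals the expected value (nvme0n1 here, the empty string later while the controller is being torn down). A standalone sketch of that pattern, assuming the same /tmp/host.sock host app; the real helper lives in discovery_remove_ifc.sh and may differ in detail:

  get_bdev_list() {
      # Same pipeline as discovery_remove_ifc.sh@29 in the trace above.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once per second until the bdev list matches the expected string,
      # which may be empty while waiting for a controller to disappear.
      local expected="$*"
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1   # block until the discovered namespace shows up
  wait_for_bdev ''        # block until it is gone again after the path drop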
00:22:09.643 [2024-07-15 18:47:43.976633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.643 18:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.643 18:47:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.643 18:47:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.588 [2024-07-15 18:47:44.976713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:10.588 [2024-07-15 18:47:44.976771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:10.588 [2024-07-15 18:47:44.976800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:10.588 [2024-07-15 18:47:44.976811] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:10.588 [2024-07-15 18:47:44.976832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.588 [2024-07-15 18:47:44.976862] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:10.588 [2024-07-15 18:47:44.976915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.588 [2024-07-15 18:47:44.976930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.588 [2024-07-15 18:47:44.976944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.588 [2024-07-15 18:47:44.976954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.588 [2024-07-15 18:47:44.976976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.588 [2024-07-15 18:47:44.976987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.588 [2024-07-15 18:47:44.976997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.588 [2024-07-15 18:47:44.977007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.588 [2024-07-15 18:47:44.977018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.588 [2024-07-15 18:47:44.977028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.588 [2024-07-15 18:47:44.977038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
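The reset/reconnect errors above are the expected fallout of the test flapping the target's first interface inside the namespace while bdev_nvme keeps retrying under the 2-second ctrlr-loss timeout. Stripped of the surrounding polling, the flap is just two pairs of commands (the removal appeared earlier at discovery_remove_ifc.sh@75-76, the restore at @82-83 follows just below):

  # Take the first target path away: the host's nvme0 controller loses its TCP
  # connection, fails its resets, and is dropped together with the nvme0n1 bdev.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # Put the address back: the discovery service on 10.0.0.2:8009 is found again
  # and the subsystem re-attaches as a fresh controller (nvme1 / nvme1n1).
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up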
00:22:10.588 [2024-07-15 18:47:44.977345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14943e0 (9): Bad file descriptor 00:22:10.588 [2024-07-15 18:47:44.978357] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:10.588 [2024-07-15 18:47:44.978380] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.588 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:10.846 18:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:11.780 18:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.714 [2024-07-15 18:47:46.983892] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.714 [2024-07-15 18:47:46.983929] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.714 [2024-07-15 18:47:46.983961] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.714 [2024-07-15 18:47:47.072034] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:12.714 [2024-07-15 18:47:47.135146] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:12.714 [2024-07-15 18:47:47.135237] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:12.714 [2024-07-15 18:47:47.135260] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:12.714 [2024-07-15 18:47:47.135280] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:12.714 [2024-07-15 18:47:47.135290] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.714 [2024-07-15 18:47:47.142640] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x150d300 was disconnected and freed. delete nvme_qpair. 
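The teardown that follows is the same nvmftestfini pattern already used after the multipath test: kill the two SPDK apps, unload the kernel initiator modules, and tear the test network down. A rough sketch under those assumptions; the pids are the ones from this particular run, and _remove_spdk_ns runs with xtrace disabled in the log, so the namespace deletion below is inferred rather than copied:

  # Stop the host app (91151) and the namespaced target app (91099) from this run.
  for pid in 91151 91099; do
      kill "$pid" 2>/dev/null || continue
      while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
  done

  # Unload the kernel initiator stack pulled in by "modprobe nvme-tcp" earlier.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop the test network: deleting the namespace takes its veth ends with it
  # (assumed to be what _remove_spdk_ns does), then flush the initiator address.
  ip netns delete nvmf_tgt_ns_spdk
  ip -4 addr flush nvmf_init_if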
00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91151 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91151 ']' 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91151 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91151 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91151' 00:22:12.973 killing process with pid 91151 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91151 00:22:12.973 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91151 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.231 rmmod nvme_tcp 00:22:13.231 rmmod nvme_fabrics 00:22:13.231 rmmod nvme_keyring 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:13.231 18:47:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 91099 ']' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 91099 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91099 ']' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91099 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91099 00:22:13.231 killing process with pid 91099 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91099' 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91099 00:22:13.231 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91099 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:13.488 00:22:13.488 real 0m14.212s 00:22:13.488 user 0m24.914s 00:22:13.488 sys 0m2.169s 00:22:13.488 ************************************ 00:22:13.488 END TEST nvmf_discovery_remove_ifc 00:22:13.488 ************************************ 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.488 18:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.488 18:47:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.488 18:47:47 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:13.488 18:47:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.488 18:47:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.488 18:47:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.489 ************************************ 00:22:13.489 START TEST nvmf_identify_kernel_target 00:22:13.489 ************************************ 00:22:13.489 18:47:47 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:13.747 * Looking for test storage... 00:22:13.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.747 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:13.748 Cannot find device "nvmf_tgt_br" 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.748 Cannot find device "nvmf_tgt_br2" 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:13.748 Cannot find device "nvmf_tgt_br" 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:13.748 Cannot find device "nvmf_tgt_br2" 00:22:13.748 18:47:48 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.748 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:14.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:22:14.051 00:22:14.051 --- 10.0.0.2 ping statistics --- 00:22:14.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.051 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:14.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:14.051 00:22:14.051 --- 10.0.0.3 ping statistics --- 00:22:14.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.051 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:14.051 00:22:14.051 --- 10.0.0.1 ping statistics --- 00:22:14.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.051 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:14.051 18:47:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:14.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:14.593 Waiting for block devices as requested 00:22:14.593 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:14.593 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:14.593 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:14.852 No valid GPT data, bailing 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:14.852 No valid GPT data, bailing 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:14.852 No valid GPT data, bailing 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:14.852 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:15.110 No valid GPT data, bailing 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
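Note on the configfs steps that follow: having picked /dev/nvme1n1 as the backing device (the spdk-gpt.py probes above reported "No valid GPT data, bailing" for every candidate, i.e. none is in use), the script builds a kernel nvmet target and exports it over TCP on 10.0.0.1:4420. The xtrace shows the commands but not their redirection targets, so the attribute file names in this minimal sketch are the standard nvmet configfs ones and are an assumption about what nvmf/common.sh writes to, not a copy of it:

  modprobe nvmet                                            # shown earlier in the trace; nvmet_tcp is removed together with it at teardown
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  # the "echo SPDK-nqn.2016-06.io.spdk:testnqn" entry presumably sets the subsystem model string
  echo 1            > "$subsys/attr_allow_any_host"         # matches the bare "echo 1" entries in the trace
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                       # publish the subsystem on the port

Once the symlink is in place, the nvme discover call below returns two records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, which is exactly what the Discovery Log printed below shows.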
00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:15.110 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.1 -t tcp -s 4420 00:22:15.111 00:22:15.111 Discovery Log Number of Records 2, Generation counter 2 00:22:15.111 =====Discovery Log Entry 0====== 00:22:15.111 trtype: tcp 00:22:15.111 adrfam: ipv4 00:22:15.111 subtype: current discovery subsystem 00:22:15.111 treq: not specified, sq flow control disable supported 00:22:15.111 portid: 1 00:22:15.111 trsvcid: 4420 00:22:15.111 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:15.111 traddr: 10.0.0.1 00:22:15.111 eflags: none 00:22:15.111 sectype: none 00:22:15.111 =====Discovery Log Entry 1====== 00:22:15.111 trtype: tcp 00:22:15.111 adrfam: ipv4 00:22:15.111 subtype: nvme subsystem 00:22:15.111 treq: not specified, sq flow control disable supported 00:22:15.111 portid: 1 00:22:15.111 trsvcid: 4420 00:22:15.111 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:15.111 traddr: 10.0.0.1 00:22:15.111 eflags: none 00:22:15.111 sectype: none 00:22:15.111 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:15.111 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:15.369 ===================================================== 00:22:15.369 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:15.369 ===================================================== 00:22:15.369 Controller Capabilities/Features 00:22:15.369 ================================ 00:22:15.369 Vendor ID: 0000 00:22:15.369 Subsystem Vendor ID: 0000 00:22:15.369 Serial Number: ff7626d95c261e28426e 00:22:15.369 Model Number: Linux 00:22:15.369 Firmware Version: 6.7.0-68 00:22:15.369 Recommended Arb Burst: 0 00:22:15.369 IEEE OUI Identifier: 00 00 00 00:22:15.369 Multi-path I/O 00:22:15.369 May have multiple subsystem ports: No 00:22:15.369 May have multiple controllers: No 00:22:15.369 Associated with SR-IOV VF: No 00:22:15.369 Max Data Transfer Size: Unlimited 00:22:15.369 Max Number of Namespaces: 0 
00:22:15.369 Max Number of I/O Queues: 1024 00:22:15.369 NVMe Specification Version (VS): 1.3 00:22:15.369 NVMe Specification Version (Identify): 1.3 00:22:15.369 Maximum Queue Entries: 1024 00:22:15.369 Contiguous Queues Required: No 00:22:15.369 Arbitration Mechanisms Supported 00:22:15.369 Weighted Round Robin: Not Supported 00:22:15.369 Vendor Specific: Not Supported 00:22:15.369 Reset Timeout: 7500 ms 00:22:15.369 Doorbell Stride: 4 bytes 00:22:15.369 NVM Subsystem Reset: Not Supported 00:22:15.369 Command Sets Supported 00:22:15.369 NVM Command Set: Supported 00:22:15.369 Boot Partition: Not Supported 00:22:15.369 Memory Page Size Minimum: 4096 bytes 00:22:15.369 Memory Page Size Maximum: 4096 bytes 00:22:15.369 Persistent Memory Region: Not Supported 00:22:15.369 Optional Asynchronous Events Supported 00:22:15.369 Namespace Attribute Notices: Not Supported 00:22:15.369 Firmware Activation Notices: Not Supported 00:22:15.369 ANA Change Notices: Not Supported 00:22:15.369 PLE Aggregate Log Change Notices: Not Supported 00:22:15.369 LBA Status Info Alert Notices: Not Supported 00:22:15.369 EGE Aggregate Log Change Notices: Not Supported 00:22:15.369 Normal NVM Subsystem Shutdown event: Not Supported 00:22:15.369 Zone Descriptor Change Notices: Not Supported 00:22:15.369 Discovery Log Change Notices: Supported 00:22:15.369 Controller Attributes 00:22:15.369 128-bit Host Identifier: Not Supported 00:22:15.369 Non-Operational Permissive Mode: Not Supported 00:22:15.369 NVM Sets: Not Supported 00:22:15.369 Read Recovery Levels: Not Supported 00:22:15.369 Endurance Groups: Not Supported 00:22:15.369 Predictable Latency Mode: Not Supported 00:22:15.369 Traffic Based Keep ALive: Not Supported 00:22:15.369 Namespace Granularity: Not Supported 00:22:15.369 SQ Associations: Not Supported 00:22:15.369 UUID List: Not Supported 00:22:15.369 Multi-Domain Subsystem: Not Supported 00:22:15.369 Fixed Capacity Management: Not Supported 00:22:15.369 Variable Capacity Management: Not Supported 00:22:15.369 Delete Endurance Group: Not Supported 00:22:15.369 Delete NVM Set: Not Supported 00:22:15.369 Extended LBA Formats Supported: Not Supported 00:22:15.369 Flexible Data Placement Supported: Not Supported 00:22:15.369 00:22:15.369 Controller Memory Buffer Support 00:22:15.369 ================================ 00:22:15.369 Supported: No 00:22:15.369 00:22:15.369 Persistent Memory Region Support 00:22:15.369 ================================ 00:22:15.369 Supported: No 00:22:15.369 00:22:15.369 Admin Command Set Attributes 00:22:15.370 ============================ 00:22:15.370 Security Send/Receive: Not Supported 00:22:15.370 Format NVM: Not Supported 00:22:15.370 Firmware Activate/Download: Not Supported 00:22:15.370 Namespace Management: Not Supported 00:22:15.370 Device Self-Test: Not Supported 00:22:15.370 Directives: Not Supported 00:22:15.370 NVMe-MI: Not Supported 00:22:15.370 Virtualization Management: Not Supported 00:22:15.370 Doorbell Buffer Config: Not Supported 00:22:15.370 Get LBA Status Capability: Not Supported 00:22:15.370 Command & Feature Lockdown Capability: Not Supported 00:22:15.370 Abort Command Limit: 1 00:22:15.370 Async Event Request Limit: 1 00:22:15.370 Number of Firmware Slots: N/A 00:22:15.370 Firmware Slot 1 Read-Only: N/A 00:22:15.370 Firmware Activation Without Reset: N/A 00:22:15.370 Multiple Update Detection Support: N/A 00:22:15.370 Firmware Update Granularity: No Information Provided 00:22:15.370 Per-Namespace SMART Log: No 00:22:15.370 Asymmetric Namespace Access Log Page: 
Not Supported 00:22:15.370 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:15.370 Command Effects Log Page: Not Supported 00:22:15.370 Get Log Page Extended Data: Supported 00:22:15.370 Telemetry Log Pages: Not Supported 00:22:15.370 Persistent Event Log Pages: Not Supported 00:22:15.370 Supported Log Pages Log Page: May Support 00:22:15.370 Commands Supported & Effects Log Page: Not Supported 00:22:15.370 Feature Identifiers & Effects Log Page:May Support 00:22:15.370 NVMe-MI Commands & Effects Log Page: May Support 00:22:15.370 Data Area 4 for Telemetry Log: Not Supported 00:22:15.370 Error Log Page Entries Supported: 1 00:22:15.370 Keep Alive: Not Supported 00:22:15.370 00:22:15.370 NVM Command Set Attributes 00:22:15.370 ========================== 00:22:15.370 Submission Queue Entry Size 00:22:15.370 Max: 1 00:22:15.370 Min: 1 00:22:15.370 Completion Queue Entry Size 00:22:15.370 Max: 1 00:22:15.370 Min: 1 00:22:15.370 Number of Namespaces: 0 00:22:15.370 Compare Command: Not Supported 00:22:15.370 Write Uncorrectable Command: Not Supported 00:22:15.370 Dataset Management Command: Not Supported 00:22:15.370 Write Zeroes Command: Not Supported 00:22:15.370 Set Features Save Field: Not Supported 00:22:15.370 Reservations: Not Supported 00:22:15.370 Timestamp: Not Supported 00:22:15.370 Copy: Not Supported 00:22:15.370 Volatile Write Cache: Not Present 00:22:15.370 Atomic Write Unit (Normal): 1 00:22:15.370 Atomic Write Unit (PFail): 1 00:22:15.370 Atomic Compare & Write Unit: 1 00:22:15.370 Fused Compare & Write: Not Supported 00:22:15.370 Scatter-Gather List 00:22:15.370 SGL Command Set: Supported 00:22:15.370 SGL Keyed: Not Supported 00:22:15.370 SGL Bit Bucket Descriptor: Not Supported 00:22:15.370 SGL Metadata Pointer: Not Supported 00:22:15.370 Oversized SGL: Not Supported 00:22:15.370 SGL Metadata Address: Not Supported 00:22:15.370 SGL Offset: Supported 00:22:15.370 Transport SGL Data Block: Not Supported 00:22:15.370 Replay Protected Memory Block: Not Supported 00:22:15.370 00:22:15.370 Firmware Slot Information 00:22:15.370 ========================= 00:22:15.370 Active slot: 0 00:22:15.370 00:22:15.370 00:22:15.370 Error Log 00:22:15.370 ========= 00:22:15.370 00:22:15.370 Active Namespaces 00:22:15.370 ================= 00:22:15.370 Discovery Log Page 00:22:15.370 ================== 00:22:15.370 Generation Counter: 2 00:22:15.370 Number of Records: 2 00:22:15.370 Record Format: 0 00:22:15.370 00:22:15.370 Discovery Log Entry 0 00:22:15.370 ---------------------- 00:22:15.370 Transport Type: 3 (TCP) 00:22:15.370 Address Family: 1 (IPv4) 00:22:15.370 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:15.370 Entry Flags: 00:22:15.370 Duplicate Returned Information: 0 00:22:15.370 Explicit Persistent Connection Support for Discovery: 0 00:22:15.370 Transport Requirements: 00:22:15.370 Secure Channel: Not Specified 00:22:15.370 Port ID: 1 (0x0001) 00:22:15.370 Controller ID: 65535 (0xffff) 00:22:15.370 Admin Max SQ Size: 32 00:22:15.370 Transport Service Identifier: 4420 00:22:15.370 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:15.370 Transport Address: 10.0.0.1 00:22:15.370 Discovery Log Entry 1 00:22:15.370 ---------------------- 00:22:15.370 Transport Type: 3 (TCP) 00:22:15.370 Address Family: 1 (IPv4) 00:22:15.370 Subsystem Type: 2 (NVM Subsystem) 00:22:15.370 Entry Flags: 00:22:15.370 Duplicate Returned Information: 0 00:22:15.370 Explicit Persistent Connection Support for Discovery: 0 00:22:15.370 Transport Requirements: 00:22:15.370 
Secure Channel: Not Specified 00:22:15.370 Port ID: 1 (0x0001) 00:22:15.370 Controller ID: 65535 (0xffff) 00:22:15.370 Admin Max SQ Size: 32 00:22:15.370 Transport Service Identifier: 4420 00:22:15.370 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:15.370 Transport Address: 10.0.0.1 00:22:15.370 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:15.370 get_feature(0x01) failed 00:22:15.370 get_feature(0x02) failed 00:22:15.370 get_feature(0x04) failed 00:22:15.370 ===================================================== 00:22:15.370 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:15.370 ===================================================== 00:22:15.370 Controller Capabilities/Features 00:22:15.370 ================================ 00:22:15.370 Vendor ID: 0000 00:22:15.370 Subsystem Vendor ID: 0000 00:22:15.370 Serial Number: dd2a759b8767f501d3cb 00:22:15.370 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:15.370 Firmware Version: 6.7.0-68 00:22:15.370 Recommended Arb Burst: 6 00:22:15.370 IEEE OUI Identifier: 00 00 00 00:22:15.370 Multi-path I/O 00:22:15.370 May have multiple subsystem ports: Yes 00:22:15.370 May have multiple controllers: Yes 00:22:15.370 Associated with SR-IOV VF: No 00:22:15.370 Max Data Transfer Size: Unlimited 00:22:15.370 Max Number of Namespaces: 1024 00:22:15.370 Max Number of I/O Queues: 128 00:22:15.370 NVMe Specification Version (VS): 1.3 00:22:15.370 NVMe Specification Version (Identify): 1.3 00:22:15.370 Maximum Queue Entries: 1024 00:22:15.370 Contiguous Queues Required: No 00:22:15.370 Arbitration Mechanisms Supported 00:22:15.370 Weighted Round Robin: Not Supported 00:22:15.370 Vendor Specific: Not Supported 00:22:15.370 Reset Timeout: 7500 ms 00:22:15.370 Doorbell Stride: 4 bytes 00:22:15.370 NVM Subsystem Reset: Not Supported 00:22:15.370 Command Sets Supported 00:22:15.370 NVM Command Set: Supported 00:22:15.370 Boot Partition: Not Supported 00:22:15.370 Memory Page Size Minimum: 4096 bytes 00:22:15.370 Memory Page Size Maximum: 4096 bytes 00:22:15.370 Persistent Memory Region: Not Supported 00:22:15.370 Optional Asynchronous Events Supported 00:22:15.370 Namespace Attribute Notices: Supported 00:22:15.370 Firmware Activation Notices: Not Supported 00:22:15.370 ANA Change Notices: Supported 00:22:15.370 PLE Aggregate Log Change Notices: Not Supported 00:22:15.370 LBA Status Info Alert Notices: Not Supported 00:22:15.370 EGE Aggregate Log Change Notices: Not Supported 00:22:15.370 Normal NVM Subsystem Shutdown event: Not Supported 00:22:15.370 Zone Descriptor Change Notices: Not Supported 00:22:15.370 Discovery Log Change Notices: Not Supported 00:22:15.370 Controller Attributes 00:22:15.370 128-bit Host Identifier: Supported 00:22:15.370 Non-Operational Permissive Mode: Not Supported 00:22:15.370 NVM Sets: Not Supported 00:22:15.370 Read Recovery Levels: Not Supported 00:22:15.370 Endurance Groups: Not Supported 00:22:15.370 Predictable Latency Mode: Not Supported 00:22:15.370 Traffic Based Keep ALive: Supported 00:22:15.370 Namespace Granularity: Not Supported 00:22:15.370 SQ Associations: Not Supported 00:22:15.370 UUID List: Not Supported 00:22:15.370 Multi-Domain Subsystem: Not Supported 00:22:15.370 Fixed Capacity Management: Not Supported 00:22:15.370 Variable Capacity Management: Not Supported 00:22:15.370 
Delete Endurance Group: Not Supported 00:22:15.370 Delete NVM Set: Not Supported 00:22:15.370 Extended LBA Formats Supported: Not Supported 00:22:15.370 Flexible Data Placement Supported: Not Supported 00:22:15.370 00:22:15.370 Controller Memory Buffer Support 00:22:15.370 ================================ 00:22:15.370 Supported: No 00:22:15.370 00:22:15.370 Persistent Memory Region Support 00:22:15.370 ================================ 00:22:15.370 Supported: No 00:22:15.370 00:22:15.370 Admin Command Set Attributes 00:22:15.370 ============================ 00:22:15.370 Security Send/Receive: Not Supported 00:22:15.370 Format NVM: Not Supported 00:22:15.370 Firmware Activate/Download: Not Supported 00:22:15.370 Namespace Management: Not Supported 00:22:15.370 Device Self-Test: Not Supported 00:22:15.370 Directives: Not Supported 00:22:15.370 NVMe-MI: Not Supported 00:22:15.370 Virtualization Management: Not Supported 00:22:15.370 Doorbell Buffer Config: Not Supported 00:22:15.370 Get LBA Status Capability: Not Supported 00:22:15.371 Command & Feature Lockdown Capability: Not Supported 00:22:15.371 Abort Command Limit: 4 00:22:15.371 Async Event Request Limit: 4 00:22:15.371 Number of Firmware Slots: N/A 00:22:15.371 Firmware Slot 1 Read-Only: N/A 00:22:15.371 Firmware Activation Without Reset: N/A 00:22:15.371 Multiple Update Detection Support: N/A 00:22:15.371 Firmware Update Granularity: No Information Provided 00:22:15.371 Per-Namespace SMART Log: Yes 00:22:15.371 Asymmetric Namespace Access Log Page: Supported 00:22:15.371 ANA Transition Time : 10 sec 00:22:15.371 00:22:15.371 Asymmetric Namespace Access Capabilities 00:22:15.371 ANA Optimized State : Supported 00:22:15.371 ANA Non-Optimized State : Supported 00:22:15.371 ANA Inaccessible State : Supported 00:22:15.371 ANA Persistent Loss State : Supported 00:22:15.371 ANA Change State : Supported 00:22:15.371 ANAGRPID is not changed : No 00:22:15.371 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:15.371 00:22:15.371 ANA Group Identifier Maximum : 128 00:22:15.371 Number of ANA Group Identifiers : 128 00:22:15.371 Max Number of Allowed Namespaces : 1024 00:22:15.371 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:15.371 Command Effects Log Page: Supported 00:22:15.371 Get Log Page Extended Data: Supported 00:22:15.371 Telemetry Log Pages: Not Supported 00:22:15.371 Persistent Event Log Pages: Not Supported 00:22:15.371 Supported Log Pages Log Page: May Support 00:22:15.371 Commands Supported & Effects Log Page: Not Supported 00:22:15.371 Feature Identifiers & Effects Log Page:May Support 00:22:15.371 NVMe-MI Commands & Effects Log Page: May Support 00:22:15.371 Data Area 4 for Telemetry Log: Not Supported 00:22:15.371 Error Log Page Entries Supported: 128 00:22:15.371 Keep Alive: Supported 00:22:15.371 Keep Alive Granularity: 1000 ms 00:22:15.371 00:22:15.371 NVM Command Set Attributes 00:22:15.371 ========================== 00:22:15.371 Submission Queue Entry Size 00:22:15.371 Max: 64 00:22:15.371 Min: 64 00:22:15.371 Completion Queue Entry Size 00:22:15.371 Max: 16 00:22:15.371 Min: 16 00:22:15.371 Number of Namespaces: 1024 00:22:15.371 Compare Command: Not Supported 00:22:15.371 Write Uncorrectable Command: Not Supported 00:22:15.371 Dataset Management Command: Supported 00:22:15.371 Write Zeroes Command: Supported 00:22:15.371 Set Features Save Field: Not Supported 00:22:15.371 Reservations: Not Supported 00:22:15.371 Timestamp: Not Supported 00:22:15.371 Copy: Not Supported 00:22:15.371 Volatile Write Cache: Present 
00:22:15.371 Atomic Write Unit (Normal): 1 00:22:15.371 Atomic Write Unit (PFail): 1 00:22:15.371 Atomic Compare & Write Unit: 1 00:22:15.371 Fused Compare & Write: Not Supported 00:22:15.371 Scatter-Gather List 00:22:15.371 SGL Command Set: Supported 00:22:15.371 SGL Keyed: Not Supported 00:22:15.371 SGL Bit Bucket Descriptor: Not Supported 00:22:15.371 SGL Metadata Pointer: Not Supported 00:22:15.371 Oversized SGL: Not Supported 00:22:15.371 SGL Metadata Address: Not Supported 00:22:15.371 SGL Offset: Supported 00:22:15.371 Transport SGL Data Block: Not Supported 00:22:15.371 Replay Protected Memory Block: Not Supported 00:22:15.371 00:22:15.371 Firmware Slot Information 00:22:15.371 ========================= 00:22:15.371 Active slot: 0 00:22:15.371 00:22:15.371 Asymmetric Namespace Access 00:22:15.371 =========================== 00:22:15.371 Change Count : 0 00:22:15.371 Number of ANA Group Descriptors : 1 00:22:15.371 ANA Group Descriptor : 0 00:22:15.371 ANA Group ID : 1 00:22:15.371 Number of NSID Values : 1 00:22:15.371 Change Count : 0 00:22:15.371 ANA State : 1 00:22:15.371 Namespace Identifier : 1 00:22:15.371 00:22:15.371 Commands Supported and Effects 00:22:15.371 ============================== 00:22:15.371 Admin Commands 00:22:15.371 -------------- 00:22:15.371 Get Log Page (02h): Supported 00:22:15.371 Identify (06h): Supported 00:22:15.371 Abort (08h): Supported 00:22:15.371 Set Features (09h): Supported 00:22:15.371 Get Features (0Ah): Supported 00:22:15.371 Asynchronous Event Request (0Ch): Supported 00:22:15.371 Keep Alive (18h): Supported 00:22:15.371 I/O Commands 00:22:15.371 ------------ 00:22:15.371 Flush (00h): Supported 00:22:15.371 Write (01h): Supported LBA-Change 00:22:15.371 Read (02h): Supported 00:22:15.371 Write Zeroes (08h): Supported LBA-Change 00:22:15.371 Dataset Management (09h): Supported 00:22:15.371 00:22:15.371 Error Log 00:22:15.371 ========= 00:22:15.371 Entry: 0 00:22:15.371 Error Count: 0x3 00:22:15.371 Submission Queue Id: 0x0 00:22:15.371 Command Id: 0x5 00:22:15.371 Phase Bit: 0 00:22:15.371 Status Code: 0x2 00:22:15.371 Status Code Type: 0x0 00:22:15.371 Do Not Retry: 1 00:22:15.371 Error Location: 0x28 00:22:15.371 LBA: 0x0 00:22:15.371 Namespace: 0x0 00:22:15.371 Vendor Log Page: 0x0 00:22:15.371 ----------- 00:22:15.371 Entry: 1 00:22:15.371 Error Count: 0x2 00:22:15.371 Submission Queue Id: 0x0 00:22:15.371 Command Id: 0x5 00:22:15.371 Phase Bit: 0 00:22:15.371 Status Code: 0x2 00:22:15.371 Status Code Type: 0x0 00:22:15.371 Do Not Retry: 1 00:22:15.371 Error Location: 0x28 00:22:15.371 LBA: 0x0 00:22:15.371 Namespace: 0x0 00:22:15.371 Vendor Log Page: 0x0 00:22:15.371 ----------- 00:22:15.371 Entry: 2 00:22:15.371 Error Count: 0x1 00:22:15.371 Submission Queue Id: 0x0 00:22:15.371 Command Id: 0x4 00:22:15.371 Phase Bit: 0 00:22:15.371 Status Code: 0x2 00:22:15.371 Status Code Type: 0x0 00:22:15.371 Do Not Retry: 1 00:22:15.371 Error Location: 0x28 00:22:15.371 LBA: 0x0 00:22:15.371 Namespace: 0x0 00:22:15.371 Vendor Log Page: 0x0 00:22:15.371 00:22:15.371 Number of Queues 00:22:15.371 ================ 00:22:15.371 Number of I/O Submission Queues: 128 00:22:15.371 Number of I/O Completion Queues: 128 00:22:15.371 00:22:15.371 ZNS Specific Controller Data 00:22:15.371 ============================ 00:22:15.371 Zone Append Size Limit: 0 00:22:15.371 00:22:15.371 00:22:15.371 Active Namespaces 00:22:15.371 ================= 00:22:15.371 get_feature(0x05) failed 00:22:15.371 Namespace ID:1 00:22:15.371 Command Set Identifier: NVM (00h) 
00:22:15.371 Deallocate: Supported 00:22:15.371 Deallocated/Unwritten Error: Not Supported 00:22:15.371 Deallocated Read Value: Unknown 00:22:15.371 Deallocate in Write Zeroes: Not Supported 00:22:15.371 Deallocated Guard Field: 0xFFFF 00:22:15.371 Flush: Supported 00:22:15.371 Reservation: Not Supported 00:22:15.371 Namespace Sharing Capabilities: Multiple Controllers 00:22:15.371 Size (in LBAs): 1310720 (5GiB) 00:22:15.371 Capacity (in LBAs): 1310720 (5GiB) 00:22:15.371 Utilization (in LBAs): 1310720 (5GiB) 00:22:15.371 UUID: 1a478ff6-441c-4cc6-afe8-2b75154e8758 00:22:15.371 Thin Provisioning: Not Supported 00:22:15.371 Per-NS Atomic Units: Yes 00:22:15.372 Atomic Boundary Size (Normal): 0 00:22:15.372 Atomic Boundary Size (PFail): 0 00:22:15.372 Atomic Boundary Offset: 0 00:22:15.372 NGUID/EUI64 Never Reused: No 00:22:15.372 ANA group ID: 1 00:22:15.372 Namespace Write Protected: No 00:22:15.372 Number of LBA Formats: 1 00:22:15.372 Current LBA Format: LBA Format #00 00:22:15.372 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:22:15.372 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.372 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.629 rmmod nvme_tcp 00:22:15.629 rmmod nvme_fabrics 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:15.629 
18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:15.629 18:47:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:16.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.560 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:16.560 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:16.560 00:22:16.560 real 0m3.050s 00:22:16.560 user 0m1.017s 00:22:16.560 sys 0m1.581s 00:22:16.560 18:47:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.560 18:47:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.560 ************************************ 00:22:16.560 END TEST nvmf_identify_kernel_target 00:22:16.560 ************************************ 00:22:16.560 18:47:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:16.560 18:47:51 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:16.560 18:47:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.560 18:47:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.560 18:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.560 ************************************ 00:22:16.560 START TEST nvmf_auth_host 00:22:16.560 ************************************ 00:22:16.560 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:16.817 * Looking for test storage... 
00:22:16.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:16.817 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.817 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:16.817 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.817 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:16.818 Cannot find device "nvmf_tgt_br" 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.818 Cannot find device "nvmf_tgt_br2" 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:16.818 Cannot find device "nvmf_tgt_br" 
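nvmf_veth_init first declares the test addresses (initiator 10.0.0.1, kernel-target side 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) and then tears down whatever a previous run may have left behind, so the "Cannot find device" messages here are expected rather than failures. The iproute2 calls traced next build the topology sketched below; this is a condensed rendering of the same commands, not a substitute for nvmf/common.sh:

# Two veth pairs joined by one bridge: the initiator half stays in the root netns,
# the target half moves into nvmf_tgt_ns_spdk (condensed from the log; the second
# target pair, nvmf_tgt_if2 / 10.0.0.3, and the individual 'up' calls are created the same way).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # host/initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target-side address
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                                   # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in

The three ping checks that follow (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) confirm connectivity across the bridge before anything NVMe-related is started.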
00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:16.818 Cannot find device "nvmf_tgt_br2" 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:22:16.818 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:17.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:22:17.076 00:22:17.076 --- 10.0.0.2 ping statistics --- 00:22:17.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.076 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:17.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:17.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:22:17.076 00:22:17.076 --- 10.0.0.3 ping statistics --- 00:22:17.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.076 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:17.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:22:17.076 00:22:17.076 --- 10.0.0.1 ping statistics --- 00:22:17.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.076 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=92046 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 92046 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 92046 ']' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:17.076 18:47:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.076 18:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5b0d1d28538b4f738ab4d5ef672ada9 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:18.447 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qAl 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5b0d1d28538b4f738ab4d5ef672ada9 0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5b0d1d28538b4f738ab4d5ef672ada9 0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5b0d1d28538b4f738ab4d5ef672ada9 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qAl 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qAl 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.qAl 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a65e1683ee8eae8a25c3ff19da2c513628111e22977698929f316f9efd7cbed 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Dxv 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a65e1683ee8eae8a25c3ff19da2c513628111e22977698929f316f9efd7cbed 3 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a65e1683ee8eae8a25c3ff19da2c513628111e22977698929f316f9efd7cbed 3 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a65e1683ee8eae8a25c3ff19da2c513628111e22977698929f316f9efd7cbed 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Dxv 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Dxv 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Dxv 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=80d905d2864f0c44f448536c71870a26d72b0546e5aa7334 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Pkf 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 80d905d2864f0c44f448536c71870a26d72b0546e5aa7334 0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 80d905d2864f0c44f448536c71870a26d72b0546e5aa7334 0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=80d905d2864f0c44f448536c71870a26d72b0546e5aa7334 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Pkf 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Pkf 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Pkf 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9ff1d6680beaf814e973c5b07cad8c6d030e07d7fb638e2b 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FdM 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9ff1d6680beaf814e973c5b07cad8c6d030e07d7fb638e2b 2 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9ff1d6680beaf814e973c5b07cad8c6d030e07d7fb638e2b 2 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9ff1d6680beaf814e973c5b07cad8c6d030e07d7fb638e2b 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FdM 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FdM 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.FdM 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59485151ba0563352c914488c407175e 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hPf 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59485151ba0563352c914488c407175e 
1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59485151ba0563352c914488c407175e 1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59485151ba0563352c914488c407175e 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:18.448 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hPf 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hPf 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hPf 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d31619fe0d3d2178cc92e2d0acdf7358 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DHz 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d31619fe0d3d2178cc92e2d0acdf7358 1 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d31619fe0d3d2178cc92e2d0acdf7358 1 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d31619fe0d3d2178cc92e2d0acdf7358 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:18.706 18:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DHz 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DHz 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.DHz 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:18.706 18:47:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8b961cb2117d6e9d8de4b64c83ca95a11ad2c92494374bd 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.njT 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8b961cb2117d6e9d8de4b64c83ca95a11ad2c92494374bd 2 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8b961cb2117d6e9d8de4b64c83ca95a11ad2c92494374bd 2 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8b961cb2117d6e9d8de4b64c83ca95a11ad2c92494374bd 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.njT 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.njT 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.njT 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36769a90bdb5711db393fa003093b50b 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5UU 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36769a90bdb5711db393fa003093b50b 0 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36769a90bdb5711db393fa003093b50b 0 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36769a90bdb5711db393fa003093b50b 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5UU 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5UU 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5UU 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:18.706 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=106e0af7973759295f783703c9fb30c89f969e2d736a764d9b9c960a07e7e91f 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lgv 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 106e0af7973759295f783703c9fb30c89f969e2d736a764d9b9c960a07e7e91f 3 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 106e0af7973759295f783703c9fb30c89f969e2d736a764d9b9c960a07e7e91f 3 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=106e0af7973759295f783703c9fb30c89f969e2d736a764d9b9c960a07e7e91f 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:18.707 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lgv 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lgv 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lgv 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92046 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 92046 ']' 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
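Each gen_dhchap_key call above pulls random bytes from /dev/urandom with xxd and feeds the resulting hex string through an inline "python -" snippet to produce a DH-HMAC-CHAP secret of the form DHHC-1:<digest>:<base64>:. Comparing the 48-character key 80d905...7334 generated here with the DHHC-1:00:ODBkOTA1...qvCT5A==: string that shows up later in the log suggests the standard NVMe secret representation: the ASCII key followed by a 4-byte little-endian CRC-32, base64 encoded. The helper below is a hedged reconstruction of that step, not the actual format_key body from nvmf/common.sh:

# Sketch: encode a raw hex string as a DH-HMAC-CHAP secret ("DHHC-1:<digest>:<base64>:").
# Name and body are assumptions inferred from the output format seen in the log.
format_dhchap_key_sketch() {
  local key=$1 digest=$2   # digest index: 0=null, 1=sha256, 2=sha384, 3=sha512 (matches the digests map above)
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC-32 over the ASCII key, appended little-endian (assumed)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
# format_dhchap_key_sketch 80d905d2864f0c44f448536c71870a26d72b0546e5aa7334 0
#   expected to match the DHHC-1:00:ODBk...qvCT5A==: value used later in the log

Five key/ctrlr-key pairs are generated this way and written to mode-0600 temp files, which is what the chmod/echo pairs after each mktemp are doing.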
00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.963 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qAl 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Dxv ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Dxv 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Pkf 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.FdM ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FdM 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hPf 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.DHz ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DHz 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.njT 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5UU ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5UU 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lgv 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
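After the generated files are registered with the SPDK keyring, nvmet_auth_init takes over: get_main_ns_ip picks the initiator address (10.0.0.1) and configure_kernel_target builds a kernel NVMe/TCP target for nqn.2024-02.io.spdk:cnode0 entirely through configfs. The mkdir, echo and ln -s commands traced over the next lines populate that tree, but xtrace does not show the redirection targets, so the sketch below names the standard nvmet configfs attribute files as an assumption about where each bare echo lands:

# Kernel NVMe/TCP target via configfs (sketch; values from the log, attribute file names assumed).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"       # serial string echoed in the log
echo 1 > "$subsys/attr_allow_any_host"                             # host/auth.sh later echoes 0, presumably to lock this down
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"             # backing device picked by the block scan below
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                                # expose the subsystem on the port

The "No valid GPT data, bailing" lines are the block scan choosing an unused namespace for device_path, and the nvme discover output further down (two discovery log records, subnqn nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420) is the check that the target actually came up.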
00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:19.221 18:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:19.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:19.798 Waiting for block devices as requested 00:22:19.798 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:20.071 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:20.635 18:47:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:20.635 No valid GPT data, bailing 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:20.635 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:20.636 No valid GPT data, bailing 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:20.636 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:20.893 No valid GPT data, bailing 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:20.893 No valid GPT data, bailing 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:20.893 18:47:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.1 -t tcp -s 4420 00:22:20.893 00:22:20.893 Discovery Log Number of Records 2, Generation counter 2 00:22:20.893 =====Discovery Log Entry 0====== 00:22:20.893 trtype: tcp 00:22:20.893 adrfam: ipv4 00:22:20.893 subtype: current discovery subsystem 00:22:20.893 treq: not specified, sq flow control disable supported 00:22:20.893 portid: 1 00:22:20.893 trsvcid: 4420 00:22:20.893 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:20.893 traddr: 10.0.0.1 00:22:20.893 eflags: none 00:22:20.893 sectype: none 00:22:20.893 =====Discovery Log Entry 1====== 00:22:20.893 trtype: tcp 00:22:20.893 adrfam: ipv4 00:22:20.893 subtype: nvme subsystem 00:22:20.893 treq: not specified, sq flow control disable supported 00:22:20.893 portid: 1 00:22:20.893 trsvcid: 4420 00:22:20.893 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:20.893 traddr: 10.0.0.1 00:22:20.893 eflags: none 00:22:20.893 sectype: none 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.893 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.150 nvme0n1 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:21.150 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.151 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.408 nvme0n1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.408 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.666 nvme0n1 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.666 18:47:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.666 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.667 18:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 nvme0n1 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:21.667 18:47:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.667 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 nvme0n1 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 nvme0n1 00:22:21.925 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.182 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.439 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 nvme0n1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.440 18:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.698 nvme0n1 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.698 18:47:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.698 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 nvme0n1 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 nvme0n1 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.956 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
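For readers following the trace, each connect_authenticate pass above reduces to the host-side RPC sequence below. This is a minimal sketch reconstructed from the xtrace lines, not the test script itself: it assumes rpc_cmd forwards its arguments to SPDK's scripts/rpc.py (the ./scripts/rpc.py path is an assumption), while the transport address, NQNs, key names and jq filter are copied verbatim from the trace.

  rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client behind rpc_cmd

  # Limit the initiator to the digest/DH-group pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Attach with the host key for this keyid (plus the controller key when one is configured).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # DH-HMAC-CHAP succeeded if the controller shows up; detach so the next iteration starts clean.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  $rpc bdev_nvme_detach_controller nvme0
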
00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 nvme0n1 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.214 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.215 18:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
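The iterations themselves come from the nested sweep traced at host/auth.sh@100-104: every digest is combined with every DH group and every configured keyid, the target side is re-keyed with nvmet_auth_set_key, and connect_authenticate then runs the attach/verify/detach sequence sketched above. A rough outline follows, with the array contents inferred from the printf calls earlier in the trace; how nvmet_auth_set_key programs the target is not visible in this excerpt, so only the call is shown.

  # Outline of the sweep at host/auth.sh@100-104 (array values inferred from the log).
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # keys[0..4] and ckeys[0..4] hold the DHHC-1 secrets printed above
  # (ckeys[4] is empty, so keyid 4 is attached without a controller key).

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target's digest/dhgroup/key
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host attaches, verifies, detaches
          done
      done
  done
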
00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.782 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.042 nvme0n1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.042 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.301 nvme0n1 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.301 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.560 nvme0n1 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.560 18:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 nvme0n1 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.819 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:25.078 18:47:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 nvme0n1 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.337 18:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.270 nvme0n1 00:22:27.270 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.529 18:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.787 nvme0n1 00:22:27.787 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.787 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.787 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.787 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.787 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.045 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.046 
18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.046 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.305 nvme0n1 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.305 18:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.871 nvme0n1 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.871 18:48:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.871 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.129 nvme0n1 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.129 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.387 18:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.954 nvme0n1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.954 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 nvme0n1 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.520 18:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 nvme0n1 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.455 
18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
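[editor's note] The xtrace output above repeats one host-side DH-HMAC-CHAP cycle per digest / DH-group / key-id combination. The sketch below condenses a single pass of that cycle into plain shell, using only the RPC invocations that appear verbatim in the trace. It assumes that rpc_cmd is the autotest helper that forwards its arguments to the SPDK target's JSON-RPC interface, and that the named keys key3/ckey3 were registered with the target earlier in the run; both are assumptions about parts of the test setup not shown in this section.

  # Restrict the initiator to a single digest and DH group, as in the trace above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Attach the controller with DH-HMAC-CHAP: key3 authenticates the host,
  # ckey3 (the controller key) enables bidirectional authentication.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Verify the controller came up under the expected name, then tear it down
  # before the next digest/DH-group combination is tried.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

As the trace also shows, key id 4 has no controller key (ckey is empty, hence the [[ -z '' ]] checks), so for that id the attach is issued without --dhchap-ctrlr-key; the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 is what drops the flag in that case.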
00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:31.455 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:31.456 18:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:31.456 18:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.456 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.456 18:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.022 nvme0n1 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.022 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:32.023 
18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.023 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.592 nvme0n1 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.592 18:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.592 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.851 nvme0n1 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.851 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
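The host-side sequence that repeats throughout this trace reduces to a short cycle: restrict the initiator to one digest/DH-group pair, attach the controller with the per-keyid DH-HMAC-CHAP secret (plus the controller key when the test defines one), confirm the controller actually appeared, then detach it again. The sketch below is only a reading aid for the trace; it assumes rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py and simply reuses the NQNs and address printed in the log.

    # Minimal sketch of one connect_authenticate cycle as traced above.
    # Assumes rpc_cmd wraps SPDK's scripts/rpc.py and that keyN/ckeyN were registered earlier.
    connect_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Limit the initiator to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the host key (and the controller key, when one exists for this keyid).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Authentication only counts as passed if the controller is really present.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }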
00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.852 nvme0n1 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.852 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.111 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 nvme0n1 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.112 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 nvme0n1 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 nvme0n1 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.372 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.630 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.631 18:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 nvme0n1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
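Stepping back, the repetition in this log comes from three nested loops in host/auth.sh (the @100-@104 markers above): every digest is combined with every DH group, and each generated key is first installed on the target via nvmet_auth_set_key and then exercised from the host via connect_authenticate. A rough skeleton of that driver loop, assuming the digests, dhgroups, keys and ckeys arrays were populated earlier in the script:

    # Skeleton of the driver loop visible at host/auth.sh@100-104 in this trace.
    # digests, dhgroups, keys and ckeys are assumed to be filled in earlier by the script.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # then dial in from the host
            done
        done
    done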
00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.631 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.890 nvme0n1 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.890 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.149 nvme0n1 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.149 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 nvme0n1 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 nvme0n1 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.408 18:48:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.667 18:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.667 nvme0n1 00:22:34.667 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.667 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.667 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.668 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.668 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.927 nvme0n1 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.927 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.186 18:48:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.186 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.447 nvme0n1 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:35.447 18:48:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.447 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.707 nvme0n1 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.707 18:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:35.707 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.965 nvme0n1 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.965 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.223 nvme0n1 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.223 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.482 18:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.741 nvme0n1 00:22:36.741 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.741 18:48:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.741 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.741 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.741 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.741 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.742 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.309 nvme0n1 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.309 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 nvme0n1 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
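The ip_candidates bookkeeping traced above, together with the "[[ -z 10.0.0.1 ]]" and "echo 10.0.0.1" entries that follow, is the get_main_ns_ip helper from nvmf/common.sh picking the address the host should dial for the active transport. A minimal reconstruction from the xtrace, assuming the transport name lives in TEST_TRANSPORT and that the empty-value branches simply fail; neither detail is visible in the trace, which only shows the tcp path resolving NVMF_INITIATOR_IP to 10.0.0.1:

# get_main_ns_ip as reconstructed from the nvmf/common.sh@741-755 entries above.
# TEST_TRANSPORT and the return-on-empty behaviour are assumptions.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()

	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
	echo "${!ip}"
}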
00:22:37.569 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.570 18:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.570 18:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:37.570 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.570 18:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 nvme0n1 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
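The connect_authenticate sha384 ffdhe8192 0 call that opens here repeats the sequence already traced for ffdhe4096 and ffdhe6144. Reading the pattern off the host/auth.sh line numbers in the trace (@101-@104 for the loop, @55-@65 for the body) gives roughly the sketch below; rpc_cmd, nvmet_auth_set_key and the keys/ckeys/dhgroups arrays are set up earlier in the script and are taken as given, and any error handling beyond the bare return codes is not visible in this excerpt:

# connect_authenticate: set the host's DH-HMAC-CHAP options, attach the
# controller with the keyid'th key pair, check it appears, then detach
# (host/auth.sh@55-@65 in the trace above).
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty when there is no ctrlr key (keyid 4)

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

# Outer sweep (host/auth.sh@101-@104): every dhgroup against every key.
for dhgroup in "${dhgroups[@]}"; do     # ffdhe4096, ffdhe6144, ffdhe8192 in this part of the log
	for keyid in "${!keys[@]}"; do      # 0..4
		nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # program the target side
		connect_authenticate sha384 "$dhgroup" "$keyid"   # exercise the host side
	done
done

The nvmet_auth_set_key half only appears in the trace as the echo 'hmac(sha384)' / echo ffdhe... / echo DHHC-1:... entries, presumably redirected into the target's per-host dhchap attributes; xtrace does not record redirections, so the destination paths cannot be confirmed from this log.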
00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.135 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.136 18:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.699 nvme0n1 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.699 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.955 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.518 nvme0n1 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.518 18:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.082 nvme0n1 00:22:40.082 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.083 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.341 18:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.907 nvme0n1 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
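(For reference) Every iteration above follows the same host-side RPC pattern, visible through the rpc_cmd wrapper around SPDK's scripts/rpc.py: bdev_nvme_set_options pins the digests and DH groups the host is allowed to negotiate, and bdev_nvme_attach_controller then names the DH-HMAC-CHAP key (and optional controller key) for the connection before the test verifies and detaches the resulting controller. A minimal sketch of one such iteration, assuming a reachable target at 10.0.0.1:4420 and that keys named "key2"/"ckey2" were registered earlier in the test run (that setup is outside this excerpt):

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  ./scripts/rpc.py bdev_nvme_get_controllers          # expect a controller named nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next key id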
00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.907 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.908 18:48:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.908 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.473 nvme0n1 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.473 18:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.732 nvme0n1 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.732 18:48:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.732 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.733 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.991 nvme0n1 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.991 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.992 nvme0n1 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.992 18:48:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.992 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.251 18:48:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.251 nvme0n1 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.251 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 nvme0n1 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 nvme0n1 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.510 
18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.510 18:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.769 18:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.769 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.770 18:48:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.770 nvme0n1 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
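(For reference) The secrets in this run use the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, as I read it, <t> = 00 marks a non-transformed secret and 01/02/03 mark secrets transformed with SHA-256/384/512, and the base64 payload carries the secret followed by a 4-byte CRC-32; that is consistent with the 01 keys above decoding to 36 bytes and the 02 keys to 52. A quick, hedged sanity check of a key's payload length using only coreutils:

  key='DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea:'
  payload=$(cut -d: -f3 <<< "$key")   # strip the DHHC-1:<t>: prefix and the trailing colon
  base64 -d <<< "$payload" | wc -c    # prints 36: 32-byte secret + 4-byte CRC-32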
00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.770 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.029 nvme0n1 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.029 18:48:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:43.029 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
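(For reference) The auth.sh@100-104 markers show the shape of the sweep producing this output: nested loops over the configured digests, DH groups, and key indices, with the target-side key installed first and the host-side connect/verify/detach handled second. Reconstructed as a sketch (the digests/dhgroups/keys/ckeys arrays and both helper functions are defined earlier in the script and are not reproduced here):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key setup (auth.sh@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side set_options, attach, verify, detach (auth.sh@104)
      done
    done
  done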
00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.030 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.289 nvme0n1 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.289 
18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.289 nvme0n1 00:22:43.289 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.548 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.549 18:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.549 nvme0n1 00:22:43.549 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.807 18:48:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.807 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.808 nvme0n1 00:22:43.808 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
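
The nvmet_auth_set_key calls traced above re-key the kernel nvmet target before every connection attempt: the helper resolves the DHHC-1 secret (and, when one exists, the controller secret) for the requested key id and then echoes the hash name, DH group and key material, presumably into the configfs attributes of the host entry for nqn.2024-02.io.spdk:host0. The redirection targets are not visible in the xtrace output, so the configfs paths and attribute names in the sketch below are assumptions based on the upstream nvmet layout, not taken from this log; the key values are the ones shown in the trace for key id 0.

    #!/usr/bin/env bash
    # Sketch only: provision DH-HMAC-CHAP material on the kernel nvmet target.
    # Configfs paths and attribute names are assumed, not taken from the trace.
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

    # DHHC-1 secrets copied from the trace for key id 0 (key id 4 has no
    # controller key in this run).
    keys[0]="DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C:"
    ckeys[0]="DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=:"

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        echo "hmac($digest)" > "$cfg/dhchap_hash"      # e.g. hmac(sha512)
        echo "$dhgroup"      > "$cfg/dhchap_dhgroup"   # e.g. ffdhe4096
        echo "$key"          > "$cfg/dhchap_key"       # host secret
        # Controller key is optional; written only for bidirectional auth.
        [[ -z $ckey ]] || echo "$ckey" > "$cfg/dhchap_ctrl_key"
    }

    nvmet_auth_set_key sha512 ffdhe4096 0
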
00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.067 nvme0n1 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:22:44.067 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.326 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.326 nvme0n1 00:22:44.327 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.586 18:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.586 nvme0n1 00:22:44.586 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.845 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.846 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.104 nvme0n1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
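
On the initiator side, each connect_authenticate step in the trace reduces to two SPDK RPCs followed by a verify-and-teardown: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and DH group for the iteration, and bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 (the address comes from the traced get_main_ns_ip helper, which maps the tcp transport to NVMF_INITIATOR_IP) with the key name for the current key id. The condensed sketch below is not a verbatim copy of host/auth.sh; rpc_cmd is the autotest wrapper around SPDK's RPC client, and the key${keyid}/ckey${keyid} names plus the keys/ckeys secret arrays are set up earlier in the test, outside this excerpt.

    # Condensed sketch of one connect_authenticate iteration (sha512 /
    # ffdhe6144 / key id 0, the iteration traced just above), reduced to
    # the RPC calls visible in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Same expansion as host/auth.sh@58: pass a controller key only
        # when a bidirectional secret exists for this key id.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict negotiation to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect and authenticate against the kernel nvmet target.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # Success criterion: the controller shows up, then it is detached.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    connect_authenticate sha512 ffdhe6144 0
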
00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.104 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.392 nvme0n1 00:22:45.392 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.392 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.392 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.392 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.392 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:22:45.653 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.654 18:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 nvme0n1 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.918 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.485 nvme0n1 00:22:46.485 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.486 18:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.744 nvme0n1 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.744 18:48:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjViMGQxZDI4NTM4YjRmNzM4YWI0ZDVlZjY3MmFkYTmMsd4C: 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: ]] 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE2NWUxNjgzZWU4ZWFlOGEyNWMzZmYxOWRhMmM1MTM2MjgxMTFlMjI5Nzc2OTg5MjlmMzE2ZjllZmQ3Y2JlZFv7Aus=: 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.744 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.745 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.311 nvme0n1 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.311 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.312 18:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.878 nvme0n1 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.878 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.136 18:48:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.136 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTk0ODUxNTFiYTA1NjMzNTJjOTE0NDg4YzQwNzE3NWV4Quea: 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: ]] 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxNjE5ZmUwZDNkMjE3OGNjOTJlMmQwYWNkZjczNTh6Gwp3: 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.137 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.704 nvme0n1 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.704 18:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThiOTYxY2IyMTE3ZDZlOWQ4ZGU0YjY0YzgzY2E5NWExMWFkMmM5MjQ5NDM3NGJk7IXSYg==: 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY3NjlhOTBiZGI1NzExZGIzOTNmYTAwMzA5M2I1MGLJ/ybI: 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:48.704 18:48:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.704 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.269 nvme0n1 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTA2ZTBhZjc5NzM3NTkyOTVmNzgzNzAzYzlmYjMwYzg5Zjk2OWUyZDczNmE3NjRkOWI5Yzk2MGEwN2U3ZTkxZl0HfuA=: 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:49.269 18:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.835 nvme0n1 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBkOTA1ZDI4NjRmMGM0NGY0NDg1MzZjNzE4NzBhMjZkNzJiMDU0NmU1YWE3MzM0qvCT5A==: 00:22:49.835 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: ]] 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWZmMWQ2NjgwYmVhZjgxNGU5NzNjNWIwN2NhZDhjNmQwMzBlMDdkN2ZiNjM4ZTJiL2mpCA==: 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.836 
18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:49.836 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.095 2024/07/15 18:48:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:50.095 request: 00:22:50.095 { 00:22:50.095 "method": "bdev_nvme_attach_controller", 00:22:50.095 "params": { 00:22:50.095 "name": "nvme0", 00:22:50.095 "trtype": "tcp", 00:22:50.095 "traddr": "10.0.0.1", 00:22:50.095 "adrfam": "ipv4", 00:22:50.095 "trsvcid": "4420", 00:22:50.095 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:50.095 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:50.095 "prchk_reftag": false, 00:22:50.095 "prchk_guard": false, 00:22:50.095 "hdgst": false, 00:22:50.095 "ddgst": false 00:22:50.095 } 00:22:50.095 } 00:22:50.095 Got JSON-RPC error response 00:22:50.095 GoRPCClient: error on JSON-RPC call 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.095 2024/07/15 18:48:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:50.095 request: 00:22:50.095 { 00:22:50.095 "method": "bdev_nvme_attach_controller", 00:22:50.095 "params": { 00:22:50.095 "name": 
"nvme0", 00:22:50.095 "trtype": "tcp", 00:22:50.095 "traddr": "10.0.0.1", 00:22:50.095 "adrfam": "ipv4", 00:22:50.095 "trsvcid": "4420", 00:22:50.095 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:50.095 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:50.095 "prchk_reftag": false, 00:22:50.095 "prchk_guard": false, 00:22:50.095 "hdgst": false, 00:22:50.095 "ddgst": false, 00:22:50.095 "dhchap_key": "key2" 00:22:50.095 } 00:22:50.095 } 00:22:50.095 Got JSON-RPC error response 00:22:50.095 GoRPCClient: error on JSON-RPC call 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.095 2024/07/15 18:48:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:50.095 request: 00:22:50.095 { 00:22:50.095 "method": "bdev_nvme_attach_controller", 00:22:50.095 "params": { 00:22:50.095 "name": "nvme0", 00:22:50.095 "trtype": "tcp", 00:22:50.095 "traddr": "10.0.0.1", 00:22:50.095 "adrfam": "ipv4", 00:22:50.095 "trsvcid": "4420", 00:22:50.095 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:50.095 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:50.095 "prchk_reftag": false, 00:22:50.095 "prchk_guard": false, 00:22:50.095 "hdgst": false, 00:22:50.095 "ddgst": false, 00:22:50.095 "dhchap_key": "key1", 00:22:50.095 "dhchap_ctrlr_key": "ckey2" 00:22:50.095 } 00:22:50.095 } 00:22:50.095 Got JSON-RPC error response 00:22:50.095 GoRPCClient: error on JSON-RPC call 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.095 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.095 rmmod nvme_tcp 00:22:50.095 rmmod nvme_fabrics 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 92046 ']' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 92046 00:22:50.354 18:48:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 92046 ']' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 92046 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92046 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.354 killing process with pid 92046 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92046' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 92046 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 92046 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.354 18:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:50.612 18:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:51.178 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:51.437 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:51.437 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:51.437 18:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qAl /tmp/spdk.key-null.Pkf /tmp/spdk.key-sha256.hPf /tmp/spdk.key-sha384.njT /tmp/spdk.key-sha512.lgv /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:51.437 18:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:52.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.086 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:52.086 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:52.086 00:22:52.086 real 0m35.327s 00:22:52.086 user 0m31.519s 00:22:52.086 sys 0m4.514s 00:22:52.086 18:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:52.086 ************************************ 00:22:52.086 END TEST nvmf_auth_host 00:22:52.086 ************************************ 00:22:52.086 18:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.086 18:48:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:52.086 18:48:26 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:22:52.086 18:48:26 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:52.086 18:48:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:52.086 18:48:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.086 18:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.086 ************************************ 00:22:52.086 START TEST nvmf_digest 00:22:52.086 ************************************ 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:52.086 * Looking for test storage... 
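Before the digest suite starts, the auth test tears down both ends: nvmftestfini unloads nvme-tcp and nvme-fabrics and kills the target process (pid 92046), the initiator address is flushed from nvmf_init_if, and clean_kernel_target dismantles the kernel nvmet configfs tree before unloading nvmet_tcp/nvmet. Condensed, the teardown traced above amounts to the following sketch (the bare 'echo 0' in the trace is assumed to disable the namespace before removal; its redirect target is not visible in the xtrace):

  # Sketch of the kernel-target teardown (clean_kernel_target) shown above.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the bare 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet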
00:22:52.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.086 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
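digest.sh begins by sourcing nvmf/common.sh, which generates a fresh host identity for the run: 'nvme gen-hostnqn' produced nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 above, and the UUID suffix doubles as the host ID passed on every connect. A sketch of that derivation (the exact parameter expansion is an assumption; only the resulting values are visible in the trace):

  # Host identity used by the digest tests, as echoed in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: keep everything after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")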
00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.087 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:52.345 Cannot find device "nvmf_tgt_br" 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.345 Cannot find device "nvmf_tgt_br2" 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:52.345 Cannot find device "nvmf_tgt_br" 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:52.345 Cannot find device "nvmf_tgt_br2" 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.345 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:52.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:52.602 00:22:52.602 --- 10.0.0.2 ping statistics --- 00:22:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.602 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:52.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:52.602 00:22:52.602 --- 10.0.0.3 ping statistics --- 00:22:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.602 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:22:52.602 00:22:52.602 --- 10.0.0.1 ping statistics --- 00:22:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.602 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.602 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:52.603 ************************************ 00:22:52.603 START TEST nvmf_digest_clean 00:22:52.603 ************************************ 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93636 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93636 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93636 ']' 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.603 18:48:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.603 18:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:52.603 [2024-07-15 18:48:27.041044] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:52.603 [2024-07-15 18:48:27.041154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.859 [2024-07-15 18:48:27.186713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.859 [2024-07-15 18:48:27.323509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.859 [2024-07-15 18:48:27.323561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.860 [2024-07-15 18:48:27.323589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.860 [2024-07-15 18:48:27.323598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.860 [2024-07-15 18:48:27.323606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
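The nvmf_veth_init sequence above builds the digest-test topology from scratch: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace for the target, everything is joined through the nvmf_br bridge, and an iptables rule admits NVMe/TCP traffic on port 4420; the three pings confirm reachability before nvmf_tgt is started inside the namespace. The same sequence as a standalone sketch (the preliminary cleanup of leftover interfaces and the error handling in common.sh are omitted):

  # Condensed sketch of the nvmf_veth_init commands traced above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1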
00:22:52.860 [2024-07-15 18:48:27.323635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.793 null0 00:22:53.793 [2024-07-15 18:48:28.219635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.793 [2024-07-15 18:48:28.243750] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93686 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93686 /var/tmp/bperf.sock 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93686 ']' 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.793 18:48:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:54.051 [2024-07-15 18:48:28.296021] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:54.051 [2024-07-15 18:48:28.296110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93686 ] 00:22:54.051 [2024-07-15 18:48:28.435382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.310 [2024-07-15 18:48:28.553565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.879 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.879 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:22:54.879 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:54.879 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:54.879 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:55.139 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:55.139 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:55.397 nvme0n1 00:22:55.397 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:55.397 18:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:55.655 Running I/O for 2 seconds... 
00:22:57.555 00:22:57.555 Latency(us) 00:22:57.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.555 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:57.555 nvme0n1 : 2.00 22302.01 87.12 0.00 0.00 5733.88 3105.16 15416.56 00:22:57.555 =================================================================================================================== 00:22:57.555 Total : 22302.01 87.12 0.00 0.00 5733.88 3105.16 15416.56 00:22:57.555 0 00:22:57.555 18:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:57.555 18:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:57.555 18:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:57.555 18:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:57.555 18:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:57.555 | select(.opcode=="crc32c") 00:22:57.555 | "\(.module_name) \(.executed)"' 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93686 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93686 ']' 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93686 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93686 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.813 killing process with pid 93686 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.813 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93686' 00:22:57.814 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93686 00:22:57.814 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.814 00:22:57.814 Latency(us) 00:22:57.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.814 =================================================================================================================== 00:22:57.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.814 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93686 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93772 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93772 /var/tmp/bperf.sock 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93772 ']' 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.071 18:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:58.071 [2024-07-15 18:48:32.449336] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:58.071 [2024-07-15 18:48:32.449424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93772 ] 00:22:58.071 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:58.071 Zero copy mechanism will not be used. 
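The pass/fail decision for each of these runs is the accel_get_stats check traced a few lines up: read the accel statistics back over the bperf socket and keep only the crc32c operation. With scan_dsa=false the expected module is plain software, so the check is simply that a software crc32c count was recorded. A standalone form of that pipeline (the example output value is made up; only the module name and a non-zero count matter):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g.:  software 12345    (count shown here is hypothetical)

read -r then splits that line into acc_module and acc_executed, and the run passes when acc_executed > 0 and acc_module matches exp_module=software, exactly as the host/digest.sh@94-96 checks above show.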
00:22:58.329 [2024-07-15 18:48:32.583882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.329 [2024-07-15 18:48:32.686655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.261 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.261 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:22:59.261 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:59.261 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:59.261 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:59.519 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.519 18:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.776 nvme0n1 00:22:59.776 18:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:59.776 18:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:00.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:00.033 Zero copy mechanism will not be used. 00:23:00.033 Running I/O for 2 seconds... 00:23:01.935 00:23:01.935 Latency(us) 00:23:01.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.935 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:01.935 nvme0n1 : 2.00 8928.16 1116.02 0.00 0.00 1788.91 553.94 6678.43 00:23:01.935 =================================================================================================================== 00:23:01.935 Total : 8928.16 1116.02 0.00 0.00 1788.91 553.94 6678.43 00:23:01.935 0 00:23:01.935 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:01.935 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:01.935 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:01.935 | select(.opcode=="crc32c") 00:23:01.935 | "\(.module_name) \(.executed)"' 00:23:01.935 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:01.935 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93772 00:23:02.195 18:48:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93772 ']' 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93772 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93772 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:02.195 killing process with pid 93772 00:23:02.195 Received shutdown signal, test time was about 2.000000 seconds 00:23:02.195 00:23:02.195 Latency(us) 00:23:02.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.195 =================================================================================================================== 00:23:02.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.195 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93772' 00:23:02.196 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93772 00:23:02.196 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93772 00:23:02.454 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:02.454 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:02.454 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:02.454 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93861 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93861 /var/tmp/bperf.sock 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93861 ']' 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.455 18:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:02.455 [2024-07-15 18:48:36.820551] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:02.455 [2024-07-15 18:48:36.820654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93861 ] 00:23:02.712 [2024-07-15 18:48:36.957768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.712 [2024-07-15 18:48:37.059184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.278 18:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.278 18:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:03.278 18:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:03.278 18:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:03.278 18:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:03.845 18:48:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.845 18:48:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.845 nvme0n1 00:23:03.845 18:48:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:03.845 18:48:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:04.103 Running I/O for 2 seconds... 
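A quick sanity check on the result tables in this section: the MiB/s column is just IOPS scaled by the I/O size, so the two columns can be cross-checked directly from the log. For the two randread runs above (values copied from their Latency tables):

  # 4 KiB randread:   22302.01 IOPS
  echo '22302.01 * 4096 / 1048576'  | bc -l    # ~87.12 MiB/s, matches the table
  # 128 KiB randread:  8928.16 IOPS
  echo '8928.16 * 131072 / 1048576' | bc -l    # ~1116.02 MiB/s, matches the table

The all-zero tables printed next to the "Received shutdown signal" messages appear after the real results have already been reported and carry no measurement data.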
00:23:06.005 00:23:06.005 Latency(us) 00:23:06.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:06.005 nvme0n1 : 2.00 25296.53 98.81 0.00 0.00 5054.81 2106.51 18599.74 00:23:06.005 =================================================================================================================== 00:23:06.005 Total : 25296.53 98.81 0.00 0.00 5054.81 2106.51 18599.74 00:23:06.005 0 00:23:06.005 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:06.005 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:06.005 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:06.005 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:06.005 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:06.005 | select(.opcode=="crc32c") 00:23:06.005 | "\(.module_name) \(.executed)"' 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93861 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93861 ']' 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93861 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93861 00:23:06.263 killing process with pid 93861 00:23:06.263 Received shutdown signal, test time was about 2.000000 seconds 00:23:06.263 00:23:06.263 Latency(us) 00:23:06.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.263 =================================================================================================================== 00:23:06.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93861' 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93861 00:23:06.263 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93861 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:06.520 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93950 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93950 /var/tmp/bperf.sock 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93950 ']' 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:06.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.521 18:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:06.521 [2024-07-15 18:48:40.953043] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:06.521 [2024-07-15 18:48:40.953278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93950 ] 00:23:06.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:06.521 Zero copy mechanism will not be used. 
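Every bperf pass repeats the same handshake visible here: bdevperf is launched with -z --wait-for-rpc, waitforlisten blocks until /var/tmp/bperf.sock answers, and only then is framework_start_init issued. A hand-rolled stand-in for that wait, using rpc_get_methods purely as a liveness probe (that RPC is standard SPDK but does not appear in this trace, and the polling loop is an illustration, not what waitforlisten literally does):

  # keep polling until the bdevperf RPC server is listening, then continue with configuration
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done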
00:23:06.778 [2024-07-15 18:48:41.091735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.778 [2024-07-15 18:48:41.197285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.714 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.714 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:07.714 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:07.714 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:07.714 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:07.972 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.972 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:08.230 nvme0n1 00:23:08.489 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:08.489 18:48:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.489 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.489 Zero copy mechanism will not be used. 00:23:08.489 Running I/O for 2 seconds... 00:23:10.386 00:23:10.386 Latency(us) 00:23:10.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.386 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:10.386 nvme0n1 : 2.00 8563.75 1070.47 0.00 0.00 1864.67 1373.14 3557.67 00:23:10.386 =================================================================================================================== 00:23:10.386 Total : 8563.75 1070.47 0.00 0.00 1864.67 1373.14 3557.67 00:23:10.386 0 00:23:10.386 18:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:10.386 18:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:10.386 18:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:10.386 18:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:10.386 | select(.opcode=="crc32c") 00:23:10.386 | "\(.module_name) \(.executed)"' 00:23:10.386 18:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93950 00:23:10.644 18:48:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93950 ']' 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93950 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93950 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:10.644 killing process with pid 93950 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93950' 00:23:10.644 Received shutdown signal, test time was about 2.000000 seconds 00:23:10.644 00:23:10.644 Latency(us) 00:23:10.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.644 =================================================================================================================== 00:23:10.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93950 00:23:10.644 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93950 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93636 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93636 ']' 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93636 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93636 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.902 killing process with pid 93636 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93636' 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93636 00:23:10.902 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93636 00:23:11.160 00:23:11.160 real 0m18.501s 00:23:11.160 user 0m34.568s 00:23:11.160 sys 0m5.234s 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:11.160 ************************************ 00:23:11.160 END TEST nvmf_digest_clean 00:23:11.160 ************************************ 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # 
run_test nvmf_digest_error run_digest_error 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:11.160 ************************************ 00:23:11.160 START TEST nvmf_digest_error 00:23:11.160 ************************************ 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=94065 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 94065 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94065 ']' 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:11.160 18:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.160 [2024-07-15 18:48:45.608936] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:11.160 [2024-07-15 18:48:45.609055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.426 [2024-07-15 18:48:45.753505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.426 [2024-07-15 18:48:45.865173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.426 [2024-07-15 18:48:45.865229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.426 [2024-07-15 18:48:45.865244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.426 [2024-07-15 18:48:45.865257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.426 [2024-07-15 18:48:45.865268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
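The target for the error-path test is brought up the same way as for the clean test: nvmf_tgt on core 0 (-i 0) inside the nvmf_tgt_ns_spdk namespace, all tracepoint groups enabled (-e 0xFFFF), and paused until RPC configuration. The notices above also spell out how a trace could be pulled if this test needed debugging; both commands below are taken from those notices rather than from anything the script actually runs, and the copy destination is only an example:

  # snapshot the live nvmf trace for app instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/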
00:23:11.426 [2024-07-15 18:48:45.865308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.993 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.993 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:11.993 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.993 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.993 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:12.314 [2024-07-15 18:48:46.517891] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:12.314 null0 00:23:12.314 [2024-07-15 18:48:46.616714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.314 [2024-07-15 18:48:46.640793] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94109 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94109 /var/tmp/bperf.sock 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94109 ']' 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.314 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.314 18:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:12.314 [2024-07-15 18:48:46.696489] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:12.314 [2024-07-15 18:48:46.696595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94109 ] 00:23:12.582 [2024-07-15 18:48:46.829580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.582 [2024-07-15 18:48:46.941242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:13.511 18:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:13.767 nvme0n1 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:13.767 18:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:14.024 Running I/O for 2 seconds... 
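This is the heart of the error-path test and explains the wall of data-digest errors that follows: crc32c on the target was reassigned to the error-injection accel module earlier (accel_assign_opc -o crc32c -m error), the module is told to corrupt 256 crc32c operations just before the run, and the bdevperf initiator is configured to retry forever. In the order the trace shows, over the two RPC sockets involved (target on the default /var/tmp/spdk.sock, bdevperf on /var/tmp/bperf.sock; writing rpc_cmd as plain rpc.py here is an assumption about that wrapper):

  # initiator: collect NVMe error stats and never give up on retries
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: make sure no stale injection is active
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # initiator: attach the digest-enabled controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target: corrupt the next 256 crc32c operations
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator: drive the 2-second workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest is caught by the initiator in nvme_tcp_accel_seq_recv_compute_crc32_done, and the affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the READ completions below show.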
00:23:14.024 [2024-07-15 18:48:48.334236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.024 [2024-07-15 18:48:48.334296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.024 [2024-07-15 18:48:48.334312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.024 [2024-07-15 18:48:48.345111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.024 [2024-07-15 18:48:48.345159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.024 [2024-07-15 18:48:48.345174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.024 [2024-07-15 18:48:48.357161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.024 [2024-07-15 18:48:48.357205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.024 [2024-07-15 18:48:48.357219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.370173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.370218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.370232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.382746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.382791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.382805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.395255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.395313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.395328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.408300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.408344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.408358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.420590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.420633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.420647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.431588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.431646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.431660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.446075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.446118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.446132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.455757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.455796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.455809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.469029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.469067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.469080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.481989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.482030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.482044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.025 [2024-07-15 18:48:48.495225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.025 [2024-07-15 18:48:48.495263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.025 [2024-07-15 18:48:48.495276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.507587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.507627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.507641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.520777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.520817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.520830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.531882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.531923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.531936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.545684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.545727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.545741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.559895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.559937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.571169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.571211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.571225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.584940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.584995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:14.282 [2024-07-15 18:48:48.585010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.597639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.597688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.597701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.611757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.611802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.611816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.625074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.625116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.625130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.637033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.637073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.637085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.648946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.649006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.649020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.660627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.660669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.660683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.674178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.674221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:7640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.674235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.686631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.686673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.686690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.700138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.700180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.700194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.713200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.713241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.713256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.726119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.726160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.726174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.739141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.739182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.739196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.751567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.751607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.751621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.282 [2024-07-15 18:48:48.763131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.282 [2024-07-15 18:48:48.763171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.282 [2024-07-15 18:48:48.763184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.776612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.776654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.776667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.787239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.787279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.787292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.801006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.801046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.801061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.814358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.814400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.814415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.827084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.827130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.827145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.840495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.840546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.840560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.852301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 
00:23:14.540 [2024-07-15 18:48:48.852348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.852362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.865893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.865943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.865971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.878068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.878121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.878136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.891162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.891215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.891230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.904881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.904934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.904960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.916782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.916830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.916861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.929709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.929758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.929772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.941786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.941832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.941846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.956366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.956414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.956428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.969828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.969891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.983298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.983344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.983358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:48.996853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:48.996901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:48.996915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:49.010044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:49.010094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:49.010109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.540 [2024-07-15 18:48:49.020978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.540 [2024-07-15 18:48:49.021024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.540 [2024-07-15 18:48:49.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.033371] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.033420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.033434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.046296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.046340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.046355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.056860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.056901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.056914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.070952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.071002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.071015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.083209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.083249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.083263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.097032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.097073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.097086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.107392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.107435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.107448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:14.798 [2024-07-15 18:48:49.120364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.120407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.120419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.131203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.131239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.131252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.145412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.145450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.157385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.157423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.157436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.166967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.167002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.167015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.179784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.179822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.191889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.191927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.191940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.204423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.204461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.204472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.214257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.214292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.214305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.226813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.226850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.226863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.239544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.239582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.239596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.251346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.251383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.251396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.262384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.262423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.262436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.798 [2024-07-15 18:48:49.274534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:14.798 [2024-07-15 18:48:49.274582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.798 [2024-07-15 18:48:49.274594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.288274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.288315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.288327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.300651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.300693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.300706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.312302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.312344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.324251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.324298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.324312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.335639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.335698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.348218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.348265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.348278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.360899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.360956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:15.056 [2024-07-15 18:48:49.360970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.375091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.375139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.375153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.388196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.388250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.388265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.399131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.399200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.412717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.412770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.412784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.426082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.426136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.426151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.439313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.439373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.439388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.452199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.452259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:10833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.452274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.465842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.056 [2024-07-15 18:48:49.465901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.056 [2024-07-15 18:48:49.465916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.056 [2024-07-15 18:48:49.477225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.057 [2024-07-15 18:48:49.477288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.057 [2024-07-15 18:48:49.477305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.057 [2024-07-15 18:48:49.489841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.057 [2024-07-15 18:48:49.489894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.057 [2024-07-15 18:48:49.489909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.057 [2024-07-15 18:48:49.504757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.057 [2024-07-15 18:48:49.504811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.057 [2024-07-15 18:48:49.504827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.057 [2024-07-15 18:48:49.518774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.057 [2024-07-15 18:48:49.518834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.057 [2024-07-15 18:48:49.518850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.057 [2024-07-15 18:48:49.531462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.057 [2024-07-15 18:48:49.531520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.057 [2024-07-15 18:48:49.531535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.546002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.546060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.546075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.556030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.556086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.556101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.571768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.571830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.571844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.586092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.586155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.586170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.600769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.600830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.600845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.614338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.614398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.614413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.626175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.626231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.626246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.640602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 
00:23:15.336 [2024-07-15 18:48:49.640657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.640672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.651942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.652006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.652020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.666724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.666777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.666792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.678814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.678866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.678881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.690642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.690696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.690711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.704942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.705007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.705022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.718398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.718454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.718469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.731614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.731668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.731683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.743836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.336 [2024-07-15 18:48:49.743890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.336 [2024-07-15 18:48:49.743904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.336 [2024-07-15 18:48:49.756567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.337 [2024-07-15 18:48:49.756618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.337 [2024-07-15 18:48:49.756632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.337 [2024-07-15 18:48:49.769490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.337 [2024-07-15 18:48:49.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.337 [2024-07-15 18:48:49.769557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.337 [2024-07-15 18:48:49.783472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.337 [2024-07-15 18:48:49.783540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.337 [2024-07-15 18:48:49.783555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.337 [2024-07-15 18:48:49.797155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.337 [2024-07-15 18:48:49.797206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.337 [2024-07-15 18:48:49.797220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.337 [2024-07-15 18:48:49.809647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.337 [2024-07-15 18:48:49.809697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.337 [2024-07-15 18:48:49.809713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.822693] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.822747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.593 [2024-07-15 18:48:49.822762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.835295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.835348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.593 [2024-07-15 18:48:49.835363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.849371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.849425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.593 [2024-07-15 18:48:49.849440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.861979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.862030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.593 [2024-07-15 18:48:49.862044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.874198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.874248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.593 [2024-07-15 18:48:49.874263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.593 [2024-07-15 18:48:49.887031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.593 [2024-07-15 18:48:49.887081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.887095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.900666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.900717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.900731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:15.594 [2024-07-15 18:48:49.914534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.914588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.914604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.927451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.927505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.927519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.941300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.941358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.941373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.955656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.955731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.967581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.967648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.967662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.978592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.978649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.978664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:49.991649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:49.991703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:49.991717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.005073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.005129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.005144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.017329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.017390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.017405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.031340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.031401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.031416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.043605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.043662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.043676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.055719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.055791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.055805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.594 [2024-07-15 18:48:50.068727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.594 [2024-07-15 18:48:50.068787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.594 [2024-07-15 18:48:50.068803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.081837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.081898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.081914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.095091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.095152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.095168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.107255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.107307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.107322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.118539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.118591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.118606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.131260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.131310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.131324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.145499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.145555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.145570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.156587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.156638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.156652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.169999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.170064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:15.851 [2024-07-15 18:48:50.170079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.180732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.180778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.180809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.194389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.194435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.194449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.207450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.207496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.207510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.219652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.219700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.219713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.233149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.233195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.233210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.245989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.246036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.246051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.259435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.259495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.273111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.273156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.273170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.286245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.286291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.286305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.299395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.299444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.299458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 [2024-07-15 18:48:50.310688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13913e0) 00:23:15.851 [2024-07-15 18:48:50.310752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.851 [2024-07-15 18:48:50.310767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.851 00:23:15.851 Latency(us) 00:23:15.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.851 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:15.851 nvme0n1 : 2.00 19946.60 77.92 0.00 0.00 6409.11 3417.23 18350.08 00:23:15.851 =================================================================================================================== 00:23:15.851 Total : 19946.60 77.92 0.00 0.00 6409.11 3417.23 18350.08 00:23:15.851 0 00:23:16.108 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:16.108 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:16.108 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:16.108 | .driver_specific 00:23:16.108 | .nvme_error 00:23:16.108 | .status_code 00:23:16.108 | .command_transient_transport_error' 00:23:16.108 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 
156 > 0 )) 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94109 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94109 ']' 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94109 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94109 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:16.366 killing process with pid 94109 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94109' 00:23:16.366 Received shutdown signal, test time was about 2.000000 seconds 00:23:16.366 00:23:16.366 Latency(us) 00:23:16.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.366 =================================================================================================================== 00:23:16.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94109 00:23:16.366 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94109 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94200 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94200 /var/tmp/bperf.sock 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94200 ']' 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:16.624 18:48:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:16.624 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:16.624 Zero copy mechanism will not be used.
00:23:16.624 [2024-07-15 18:48:50.958074] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:23:16.624 [2024-07-15 18:48:50.958164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94200 ]
00:23:16.624 [2024-07-15 18:48:51.096663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:16.882 [2024-07-15 18:48:51.203999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:17.445 18:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:17.445 18:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:23:17.445 18:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:17.445 18:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:17.710 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:17.967 nvme0n1
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:17.967 18:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:18.226 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:18.226 Zero copy mechanism will not be used.
00:23:18.226 Running I/O for 2 seconds...
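For reference, the xtrace above amounts to the following sequence, shown here as a condensed shell sketch rather than a verbatim extract of host/digest.sh: start bdevperf as the NVMe/TCP host with its own RPC socket, enable per-bdev NVMe error counters, attach the subsystem with data digest (--ddgst) enabled, arm crc32c error injection in the accel layer, run the 2-second workload, and read back the command_transient_transport_error counter (the earlier pass with queue depth 128 and 4096-byte reads returned 156, and the script only checks that the count is greater than zero). The backgrounding, the placeholder sleep, and the assumption that the unqualified rpc.py calls (rpc_cmd in the trace) address the SPDK target application's default RPC socket are editorial additions, not part of the log.

# Condensed sketch reconstructed from the xtrace above; not the literal test script.
SPDK=/home/vagrant/spdk_repo/spdk

# Start bdevperf (the NVMe/TCP host side) with its own RPC socket: 131072-byte random reads,
# queue depth 16, 2-second runtime, waiting for an RPC to start the run (-z).
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
sleep 1   # the harness instead uses waitforlisten on /var/tmp/bperf.sock

# Keep per-bdev NVMe error statistics and retry failed commands indefinitely.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale injection (assumed to go to the target app's default RPC socket),
# then attach the subsystem with data digest enabled.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c error injection; '-o crc32c -t corrupt -i 32' is taken verbatim from the trace.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then count the transient transport errors recorded for nvme0n1.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'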
00:23:18.226 [2024-07-15 18:48:52.543661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.543712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.543725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.547822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.547860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.547871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.552005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.552042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.552053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.554589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.554624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.554635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.558006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.558039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.558055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.561726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.561765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.561777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.564911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.564963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 18:48:52.564977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.226 [2024-07-15 18:48:52.568048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.226 [2024-07-15 18:48:52.568083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.568094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.571804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.571843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.576059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.576096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.576109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.578787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.578834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.578845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.582508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.582544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.582557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.585576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.585612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.585624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.588531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.588566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.588577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.591945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.591990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.592018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.595483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.595521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.595532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.598430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.598469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.598481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.601874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.601910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.601937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.604714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.604749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.604776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.608648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.608685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.608712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.612127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.612164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.612176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.615773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.615808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.615819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.618002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.618034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.621488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.621522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.621533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.624906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.624955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.624967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.627249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.627283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.627294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.630785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.630822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.630833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.634488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.634524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 
[2024-07-15 18:48:52.634535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.638060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.638096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.638107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.640736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.640769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.640796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.643897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.643959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.647150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.647184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.647195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.649822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.649857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.649868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.653289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.653324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.656880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.656914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.656926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.659599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.227 [2024-07-15 18:48:52.659634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 18:48:52.659645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.227 [2024-07-15 18:48:52.663054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.663090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.663101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.665830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.665865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.665876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.669318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.669355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.669366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.672373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.672409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.672420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.675540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.675577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.675589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.678607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.678645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.678658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.682420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.682463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.685084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.685122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.685135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.689096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.689142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.689155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.692572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.692614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.692628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.695620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.695659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.699462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.699502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.699515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.228 [2024-07-15 18:48:52.703853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.228 [2024-07-15 18:48:52.703894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 18:48:52.703906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.707849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.707889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.707902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.710641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.710680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.710692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.715226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.715279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.715292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.719701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.719741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.719753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.722987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.723023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.723036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.725888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.725926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.725939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.730032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.730067] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.730096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.734230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.734269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.734282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.738033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.738068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.738096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.740579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.740611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.740622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.744182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.744219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.744230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.747936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.748027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.748039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.751573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.751607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.751618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.754044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 
00:23:18.489 [2024-07-15 18:48:52.754079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.754091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.758267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.758308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.758320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.762332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.762369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.762380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.766317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.766353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.766364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.769214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.769254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.769266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.772960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.772996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.773009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.777008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.777056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.781359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.781399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.781411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.784376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.489 [2024-07-15 18:48:52.784409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.489 [2024-07-15 18:48:52.784421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.489 [2024-07-15 18:48:52.787919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.787970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.787982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.791814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.791849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.791860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.795804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.795842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.795854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.798830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.798866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.798878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.802498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.802535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.802547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.806357] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.806406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.809388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.809426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.809438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.813058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.813100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.813112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.816158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.816197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.816226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.819660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.819697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.819725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.822776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.822814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.822826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.826637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.826704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.826716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:18.490 [2024-07-15 18:48:52.830112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.830152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.830165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.833504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.833549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.833561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.837635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.837675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.837687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.841131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.841166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.841177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.843962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.843996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.844007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.847342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.847378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.847389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.850722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.850773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.853674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.853711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.853724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.857379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.857417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.857428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.861631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.861671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.861683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.865762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.865803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.865816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.868395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.868431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.868442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.872542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.872579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.872591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.876497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.876535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.876546] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.880155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.880194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.880207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.882530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.882565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.882577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.886703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.886742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.490 [2024-07-15 18:48:52.886755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.490 [2024-07-15 18:48:52.891032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.490 [2024-07-15 18:48:52.891071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.891082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.895060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.895100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.895112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.897813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.897850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.897862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.901453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.901520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.901532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.904922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.904977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.908009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.908043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.911992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.912027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.912039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.915441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.915487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.918487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.918526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.918538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.922360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.922398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.922410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.925521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.925558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:18.491 [2024-07-15 18:48:52.925570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.928671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.928709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.928721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.932598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.932639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.935452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.935489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.935500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.938595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.938646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.938658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.941886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.941928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.941941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.945554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.945596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.945609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.950072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.950114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.953186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.953223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.953235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.956753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.956792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.956805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.961162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.961203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.961215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.965598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.965641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.965654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.491 [2024-07-15 18:48:52.969394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.491 [2024-07-15 18:48:52.969434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.491 [2024-07-15 18:48:52.969447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.971906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.971957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.971970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.976218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.976258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.976270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.980668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.980709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.980722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.984594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.984633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.984646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.987460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.987498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.987510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.991160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.991201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.991214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.994669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.994710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.994722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:52.998013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:52.998051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:52.998063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.001658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:53.001699] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:53.001712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.005160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:53.005200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:53.005214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.008781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:53.008821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:53.008834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.012333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:53.012371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:53.012384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.016091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.751 [2024-07-15 18:48:53.016131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.751 [2024-07-15 18:48:53.016144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.751 [2024-07-15 18:48:53.020175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.020215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.020227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.022756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.022794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.022806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.026567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 
00:23:18.752 [2024-07-15 18:48:53.026608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.026620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.030707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.030746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.030759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.034786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.034823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.034851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.037804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.037841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.037854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.041441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.041488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.041500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.045016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.045049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.045061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.047895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.047932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.047956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.051597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.051635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.051647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.055339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.055377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.055389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.059811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.059849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.059861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.062480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.062518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.062531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.066221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.066270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.070696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.070736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.070749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.074445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.074485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.074498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.077661] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.077701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.077714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.081308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.081346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.081359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.084881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.084922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.084934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.087708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.087746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.087758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.091236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.091273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.091284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.094300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.094338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.094350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.097258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.097295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.097307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:18.752 [2024-07-15 18:48:53.101266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.101306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.101318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.104027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.104064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.107696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.107734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.107746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.111582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.111622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.111633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.115254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.115292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.115304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.118059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.752 [2024-07-15 18:48:53.118094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.752 [2024-07-15 18:48:53.118107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.752 [2024-07-15 18:48:53.122092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.122129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.122142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.125685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.125733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.125745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.128444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.128481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.131714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.131753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.131766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.135310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.135347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.135360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.137968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.138004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.138017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.141776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.141814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.145755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.145795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.145808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.148531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.148569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.148580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.151917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.151970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.151983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.155974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.156011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.156023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.159768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.159807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.159819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.163122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.163160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.163172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.166711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.166748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.166760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.169363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.169400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 
[2024-07-15 18:48:53.169412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.173201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.173240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.173252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.176897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.176937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.176961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.180274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.180313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.180325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.183682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.183719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.183731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.187272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.187308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.187319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.190089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.190132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.190144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.193613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.193650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.193662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.196875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.196910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.196921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.200278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.200316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.200327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.203703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.203738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.203749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.206604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.206651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.206662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.210130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.210167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.210179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.214298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.214338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.214350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.753 [2024-07-15 18:48:53.218263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:18.753 [2024-07-15 18:48:53.218301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.753 [2024-07-15 18:48:53.218314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:18.753 - 00:23:19.273 [2024-07-15 18:48:53.220637 - 18:48:53.741772] tqpair=(0x6af380) qid:1: repeated sequence of nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error, each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (varying cid and lba) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0 for the same cid
00:23:19.273 [2024-07-15 18:48:53.745450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) [2024-07-15 18:48:53.745506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 18:48:53.745519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 18:48:53.748669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.273 [2024-07-15 18:48:53.748710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 18:48:53.748723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.531 [2024-07-15 18:48:53.752140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.531 [2024-07-15 18:48:53.752183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.531 [2024-07-15 18:48:53.752196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.531 [2024-07-15 18:48:53.755884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.531 [2024-07-15 18:48:53.755927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.531 [2024-07-15 18:48:53.755940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.531 [2024-07-15 18:48:53.759546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.759588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.759618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.763381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.763424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.767008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.767049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.767061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.770459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.770499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.770512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.773974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.774017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.774031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.777582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.777622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.781587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.781641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.784759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.784800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.784812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.788112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.788152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.788165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.791960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.791998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.792012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.795643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.795689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.795702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.798769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.798812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.798824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.802476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.802518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.802530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.806147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.806188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.806200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.809271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.809314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.809327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.812977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.813017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.816714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.816757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.816770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.820356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 
[2024-07-15 18:48:53.820398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.820411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.823543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.823587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.823600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.827358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.827401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.827414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.831634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.831679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.831692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.834720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.834756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.834769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.838852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.838896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.838908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.842142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.842185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.842197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.845641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.845682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.845695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.849338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.849379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.849392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.853281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.853324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.853337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.856875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.856917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.856929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.860976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.532 [2024-07-15 18:48:53.861019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.532 [2024-07-15 18:48:53.861031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.532 [2024-07-15 18:48:53.863627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.863667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.863679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.867367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.867407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.867420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.871975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.872014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.872026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.876598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.876645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.876658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.880843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.880887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.880900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.883339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.883377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.883389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.887793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.887840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.887852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.891034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.891074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.891087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.894708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.894748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.894761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.899338] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.899382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.899394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.903110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.903151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.903163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.905973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.906010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.906022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.910746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.910792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.910805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.915434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.915480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.915493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.918738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.918778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.918790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.922718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.922761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.922773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:19.533 [2024-07-15 18:48:53.927040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.927080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.927092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.931690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.931738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.931751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.935828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.935872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.935885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.938349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.938386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.938398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.942858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.942905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.942918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.946933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.946990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.947003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.950921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.950973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.950986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.953519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.953555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.953567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.957691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.957731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.957743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.961224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.961265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.961278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.964513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.964553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.964565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.968455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.968496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.968508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.972763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.533 [2024-07-15 18:48:53.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.533 [2024-07-15 18:48:53.972821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.533 [2024-07-15 18:48:53.976731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.976773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.976786] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.979999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.980036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.980048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.982962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.983000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.983012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.987004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.987043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.987056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.991440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.991485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.991498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.995509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.995548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.995562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:53.998472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:53.998511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:53.998523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:54.001831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:54.001871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:54.001883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:54.004849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:54.004888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:54.004900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.534 [2024-07-15 18:48:54.008824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.534 [2024-07-15 18:48:54.008870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.534 [2024-07-15 18:48:54.008882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.012879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.012925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.012938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.016433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.016477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.016489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.020366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.020415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.020427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.023574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.023617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.023630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.027537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.027581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.793 [2024-07-15 18:48:54.027594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.031040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.031082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.031095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.034137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.034175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.034188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.038819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.038865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.038878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.042732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.042776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.042788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.045778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.045819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.045832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.049678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.049722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.049734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.054119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.054164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.054177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.057211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.057253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.057265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.061170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.061216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.061228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.064759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.064804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.064816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.067695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.067739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.067752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.071498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.071541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.071553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.075710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.075754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.075767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.078811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.078865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.082865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.082908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.082920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.087351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.087397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.087410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.091660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.091705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.091718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.094440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.094478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.094490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.098116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.098157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.098170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.102422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.793 [2024-07-15 18:48:54.102467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.793 [2024-07-15 18:48:54.102496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.793 [2024-07-15 18:48:54.106541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.106592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.106605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.109295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.109336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.109349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.113255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.113300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.113313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.117232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.117279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.117292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.120680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.120725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.120738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.124685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.124731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.124744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.128017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.128059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.128072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.131494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 
[2024-07-15 18:48:54.131536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.131549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.135556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.135602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.135615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.139618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.139665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.139677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.142237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.142280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.142310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.146860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.146908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.146920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.151637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.151700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.154425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.154465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.154477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.158312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.158356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.158369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.162625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.162674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.162687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.165286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.165331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.169735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.169782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.169795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.173138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.173185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.173198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.177191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.177241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.177253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.181350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.181401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.181416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.184799] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.184847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.184859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.187859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.187905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.187918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.192737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.192792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.192806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.196080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.196126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.196139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.199795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.199843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.199856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.204062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.204107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.204119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.207522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.207565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.207578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:19.794 [2024-07-15 18:48:54.211266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.211314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.794 [2024-07-15 18:48:54.211326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.794 [2024-07-15 18:48:54.214629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.794 [2024-07-15 18:48:54.214672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.214684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.218738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.218789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.218802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.222667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.222713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.222726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.225916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.225970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.225983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.229630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.229672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.229702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.233017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.233059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.233071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.237438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.237494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.237508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.242103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.242151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.242165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.244596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.244654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.244667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.248502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.248547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.248560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.252631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.252678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.252690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.255897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.255940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.255965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.260053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.260096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.260108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.264746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.264795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.264826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.268041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.268081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.268111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.795 [2024-07-15 18:48:54.272056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:19.795 [2024-07-15 18:48:54.272101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.795 [2024-07-15 18:48:54.272115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.276474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.276520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.276534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.280709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.280756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.280769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.284849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.284896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.284909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.287565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.287606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.287619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.292094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.292140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.292153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.295542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.295586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.295616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.299133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.299173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.299185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.303380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.303428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.303441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.306520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.306564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.306576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.310449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.310495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.310509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.314419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.314467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 
[2024-07-15 18:48:54.314479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.317655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.317700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.317714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.320856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.320932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.324929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.324988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.325020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.328140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.328183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.328213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.331524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.331568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.331581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.335440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.335487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.335499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.339130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.339176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.339188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.343021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.343064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.343077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.346468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.346513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.346526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.349570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.349620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.349633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.352985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.353026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.353039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.357139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.357183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.357195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.360100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.360142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.360155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.363739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.363782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.363813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.367686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.367733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.367746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.372095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.372146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.054 [2024-07-15 18:48:54.372159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.054 [2024-07-15 18:48:54.375822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.054 [2024-07-15 18:48:54.375865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.375877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.378951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.379003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.379015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.382854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.382896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.382909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.386495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.386537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.386551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.390464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.390510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.390524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.394063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.394107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.394120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.397150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.397192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.397205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.400332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.400372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.400402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.404324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.404371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.404385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.408579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.408627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.408657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.412736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.412784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.412815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.416633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 
[2024-07-15 18:48:54.416675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.416688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.419443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.419483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.419495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.423598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.423641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.423653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.427959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.428013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.428044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.431295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.431339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.431351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.434831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.434875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.434888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.438734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.438781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.438811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.442266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.442316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.442330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.446117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.446166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.446180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.450222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.450271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.450285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.453884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.453931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.453957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.457494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.457535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.457547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.461203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.461245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.461258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.465175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.465221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.465233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.469081] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.469132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.469146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.472721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.472762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.472774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.476429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.476474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.476486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.480725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.480774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.480787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.484112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.484159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.484172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.487361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.487416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.491381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.491426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.491438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:20.055 [2024-07-15 18:48:54.495714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.495774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.498723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.498764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.498777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.502616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.055 [2024-07-15 18:48:54.502663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.055 [2024-07-15 18:48:54.502677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.055 [2024-07-15 18:48:54.507062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.056 [2024-07-15 18:48:54.507107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.056 [2024-07-15 18:48:54.507119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.056 [2024-07-15 18:48:54.510464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.056 [2024-07-15 18:48:54.510507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.056 [2024-07-15 18:48:54.510520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.056 [2024-07-15 18:48:54.514230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.056 [2024-07-15 18:48:54.514275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.056 [2024-07-15 18:48:54.514288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.056 [2024-07-15 18:48:54.518255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380) 00:23:20.056 [2024-07-15 18:48:54.518297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.056 [2024-07-15 18:48:54.518327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:20.056 [2024-07-15 18:48:54.522707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380)
00:23:20.056 [2024-07-15 18:48:54.522753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.056 [2024-07-15 18:48:54.522767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:20.056 [2024-07-15 18:48:54.525883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380)
00:23:20.056 [2024-07-15 18:48:54.525927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.056 [2024-07-15 18:48:54.525940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:20.056 [2024-07-15 18:48:54.529404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380)
00:23:20.056 [2024-07-15 18:48:54.529448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.056 [2024-07-15 18:48:54.529474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:20.056 [2024-07-15 18:48:54.533328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380)
00:23:20.056 [2024-07-15 18:48:54.533373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.056 [2024-07-15 18:48:54.533403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:20.313 [2024-07-15 18:48:54.536415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6af380)
00:23:20.313 [2024-07-15 18:48:54.536459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.313 [2024-07-15 18:48:54.536473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:20.313
00:23:20.313 Latency(us)
00:23:20.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.313 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:20.313 nvme0n1 : 2.00 8537.83 1067.23 0.00 0.00 1870.48 526.63 5898.24
00:23:20.313 ===================================================================================================================
00:23:20.313 Total : 8537.83 1067.23 0.00 0.00 1870.48 526.63 5898.24
00:23:20.313 0
00:23:20.313 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:20.313 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:20.313 | .driver_specific
00:23:20.313 | .nvme_error
00:23:20.313 | .status_code
00:23:20.313 | .command_transient_transport_error'
00:23:20.313 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:20.313 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 551 > 0 ))
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94200
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94200 ']'
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94200
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94200
00:23:20.570 killing process with pid 94200 Received shutdown signal, test time was about 2.000000 seconds
00:23:20.570
00:23:20.570 Latency(us)
00:23:20.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.570 ===================================================================================================================
00:23:20.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94200'
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94200
00:23:20.570 18:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94200
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94285
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94285 /var/tmp/bperf.sock
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94285 ']'
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
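The get_transient_errcount check traced at host/digest.sh@71 above reduces to one RPC call and one jq filter: read the per-bdev NVMe error counters that --nvme-error-stat exposes through bdev_get_iostat and require at least one command_transient_transport_error (this run counted 551). The randread summary is also self-consistent: 8537.83 IOPS at an IO size of 131072 bytes (0.125 MiB) works out to the reported 1067.23 MiB/s. A minimal standalone sketch of the same check, assuming the /var/tmp/bperf.sock RPC socket and the nvme0n1 bdev name shown in the trace:

  # Read bdevperf's per-bdev NVMe error counters over its JSON-RPC socket
  # (populated because bdev_nvme_set_options was called with --nvme-error-stat).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes if the injected CRC32C corruption produced
  # at least one transient transport error completion.
  (( errcount > 0 ))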
00:23:20.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.570 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:20.827 [2024-07-15 18:48:55.084314] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:20.828 [2024-07-15 18:48:55.084388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94285 ] 00:23:20.828 [2024-07-15 18:48:55.216722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.828 [2024-07-15 18:48:55.309166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.084 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.084 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:21.084 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:21.084 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.364 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.622 nvme0n1 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:21.622 18:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.622 Running I/O for 2 seconds... 
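At this point the harness has torn down the previous bdevperf instance and is preparing the next error pass, run_bperf_err randwrite 4096 128: it launches bdevperf in wait-for-RPC mode on /var/tmp/bperf.sock, enables NVMe error statistics with unlimited bdev retries, attaches the TCP controller with data digest enabled (--ddgst), arms CRC32C error injection in the accel layer, and starts the 2-second workload whose digest errors fill the lines that follow. A condensed sketch of that sequence, reconstructed from the xtrace above; the accel_error_inject_error calls go through rpc_cmd, whose socket the trace does not show, so the assumption here is that they target the nvmf target app on its default RPC socket:

    # bdevperf: core mask 0x2, 4 KiB random writes, queue depth 128, 2 s run, wait for RPC (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # target app, default socket (assumed)

    # Count error completions per NVMe status code and retry failed I/O indefinitely,
    # so injected digest errors show up as statistics instead of aborting the run.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with TCP data digest enabled, keeping error injection disabled
    # during connection setup, then corrupt 256 CRC32C operations so the digest check
    # fails and commands complete with TRANSIENT TRANSPORT ERROR (00/22).
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # Start the workload; "Running I/O for 2 seconds..." and the error lines below follow.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests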
00:23:21.622 [2024-07-15 18:48:56.030427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fac10 00:23:21.622 [2024-07-15 18:48:56.031224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.031263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.040852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e7818 00:23:21.622 [2024-07-15 18:48:56.041808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.041846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.050635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e95a0 00:23:21.622 [2024-07-15 18:48:56.051517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.059852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7da8 00:23:21.622 [2024-07-15 18:48:56.060753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.060787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.069413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f2d80 00:23:21.622 [2024-07-15 18:48:56.069924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.069964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.080035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1430 00:23:21.622 [2024-07-15 18:48:56.081199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.081234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.088382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb048 00:23:21.622 [2024-07-15 18:48:56.089812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.089859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:23:21.622 [2024-07-15 18:48:56.098404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e38d0 00:23:21.622 [2024-07-15 18:48:56.099153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.622 [2024-07-15 18:48:56.099185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.880 [2024-07-15 18:48:56.106704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0bc0 00:23:21.880 [2024-07-15 18:48:56.107606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.880 [2024-07-15 18:48:56.107640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.880 [2024-07-15 18:48:56.115682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df988 00:23:21.881 [2024-07-15 18:48:56.116400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.116427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.124312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4b08 00:23:21.881 [2024-07-15 18:48:56.124887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.124916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.135206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190dfdc0 00:23:21.881 [2024-07-15 18:48:56.136307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.136339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.143889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eb760 00:23:21.881 [2024-07-15 18:48:56.144841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.144872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.152851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4b08 00:23:21.881 [2024-07-15 18:48:56.153870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.153916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.162675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0bc0 00:23:21.881 [2024-07-15 18:48:56.163261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.163294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.171721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fe2e8 00:23:21.881 [2024-07-15 18:48:56.172570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.172599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.182135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7538 00:23:21.881 [2024-07-15 18:48:56.183465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.183499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.188663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fe2e8 00:23:21.881 [2024-07-15 18:48:56.189236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.189262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.198231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e01f8 00:23:21.881 [2024-07-15 18:48:56.198919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.198953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.209422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc998 00:23:21.881 [2024-07-15 18:48:56.210645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.210679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.217543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8618 00:23:21.881 [2024-07-15 18:48:56.219030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.219062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.225601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef6a8 00:23:21.881 [2024-07-15 18:48:56.226200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.226229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.236596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb048 00:23:21.881 [2024-07-15 18:48:56.237623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.237656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.245395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ecc78 00:23:21.881 [2024-07-15 18:48:56.246349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.246382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.254833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef270 00:23:21.881 [2024-07-15 18:48:56.255807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.255838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.264244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fd208 00:23:21.881 [2024-07-15 18:48:56.264836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.264864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.273727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f9b30 00:23:21.881 [2024-07-15 18:48:56.274669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.274703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.282757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1430 00:23:21.881 [2024-07-15 18:48:56.283602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.283627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.294015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de038 00:23:21.881 [2024-07-15 18:48:56.295447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.295477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.303883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f6020 00:23:21.881 [2024-07-15 18:48:56.305375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.310682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f9b30 00:23:21.881 [2024-07-15 18:48:56.311428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.311452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.321853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f57b0 00:23:21.881 [2024-07-15 18:48:56.323216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.323245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.331340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3d08 00:23:21.881 [2024-07-15 18:48:56.332594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.332624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.338783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ebfd0 00:23:21.881 [2024-07-15 18:48:56.339546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.339571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.347618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f9b30 00:23:21.881 [2024-07-15 18:48:56.348261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.348286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.881 [2024-07-15 18:48:56.356552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fa3a0 00:23:21.881 [2024-07-15 18:48:56.357191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.881 [2024-07-15 18:48:56.357215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.367784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fe2e8 00:23:22.140 [2024-07-15 18:48:56.368933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.368970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.376409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de470 00:23:22.140 [2024-07-15 18:48:56.377321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.377355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.385333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190efae0 00:23:22.140 [2024-07-15 18:48:56.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.386304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.394553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4298 00:23:22.140 [2024-07-15 18:48:56.395110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.395135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.403585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e1710 00:23:22.140 [2024-07-15 18:48:56.404388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.404419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.412559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb8b8 00:23:22.140 [2024-07-15 18:48:56.413271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 
18:48:56.413296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.421523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f6458 00:23:22.140 [2024-07-15 18:48:56.422087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.422113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.433313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f2948 00:23:22.140 [2024-07-15 18:48:56.434883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.434915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.439924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df550 00:23:22.140 [2024-07-15 18:48:56.440639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.440665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.449320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ea680 00:23:22.140 [2024-07-15 18:48:56.450104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.450132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.458865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0ff8 00:23:22.140 [2024-07-15 18:48:56.459345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.459371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.468509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f2510 00:23:22.140 [2024-07-15 18:48:56.469080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.469105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.478235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3498 00:23:22.140 [2024-07-15 18:48:56.478980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:22.140 [2024-07-15 18:48:56.479011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.487005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f3a28 00:23:22.140 [2024-07-15 18:48:56.487603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.487630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.496449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fa7d8 00:23:22.140 [2024-07-15 18:48:56.497138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.497164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.505187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e7c50 00:23:22.140 [2024-07-15 18:48:56.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.505813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.513840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ebb98 00:23:22.140 [2024-07-15 18:48:56.514285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.523363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fac10 00:23:22.140 [2024-07-15 18:48:56.523923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.523960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.532083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f92c0 00:23:22.140 [2024-07-15 18:48:56.532561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.532587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.543295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190feb58 00:23:22.140 [2024-07-15 18:48:56.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9196 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:22.140 [2024-07-15 18:48:56.544776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:22.140 [2024-07-15 18:48:56.549813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fac10 00:23:22.140 [2024-07-15 18:48:56.550394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.550419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.561446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e7818 00:23:22.141 [2024-07-15 18:48:56.562888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.562918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.567926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5220 00:23:22.141 [2024-07-15 18:48:56.568638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.568662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.578881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef270 00:23:22.141 [2024-07-15 18:48:56.579990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.580027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.587565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7970 00:23:22.141 [2024-07-15 18:48:56.588518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.588548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.596251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.141 [2024-07-15 18:48:56.597093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.597118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.606417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f96f8 00:23:22.141 [2024-07-15 18:48:56.607625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.607654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:22.141 [2024-07-15 18:48:56.613733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc128 00:23:22.141 [2024-07-15 18:48:56.614534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.141 [2024-07-15 18:48:56.614564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.623689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3d08 00:23:22.400 [2024-07-15 18:48:56.624506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.624535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.634762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fac10 00:23:22.400 [2024-07-15 18:48:56.636236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.636265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.641329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ea248 00:23:22.400 [2024-07-15 18:48:56.642057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.642082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.652283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f96f8 00:23:22.400 [2024-07-15 18:48:56.653394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.653425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.661189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed0b0 00:23:22.400 [2024-07-15 18:48:56.662255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.662287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.670249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1868 00:23:22.400 [2024-07-15 18:48:56.671138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:24763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.671164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.679405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8618 00:23:22.400 [2024-07-15 18:48:56.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.680317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.691045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e6fa8 00:23:22.400 [2024-07-15 18:48:56.692488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.692518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.697675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fa7d8 00:23:22.400 [2024-07-15 18:48:56.698410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.698441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.707923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fe2e8 00:23:22.400 [2024-07-15 18:48:56.708759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.708788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.719227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5a90 00:23:22.400 [2024-07-15 18:48:56.720586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.720616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.729120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec408 00:23:22.400 [2024-07-15 18:48:56.730675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.730706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.738692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fe2e8 00:23:22.400 [2024-07-15 18:48:56.740122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.740150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.747944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.400 [2024-07-15 18:48:56.749366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.749397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.757599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0788 00:23:22.400 [2024-07-15 18:48:56.759116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.759147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.767169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de470 00:23:22.400 [2024-07-15 18:48:56.768706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.768740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.777167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3498 00:23:22.400 [2024-07-15 18:48:56.778697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.778729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.785214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.400 [2024-07-15 18:48:56.786297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.786332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.795369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eb760 00:23:22.400 [2024-07-15 18:48:56.796968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.796999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.806137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5658 00:23:22.400 [2024-07-15 
18:48:56.807400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.807432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.815155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f2510 00:23:22.400 [2024-07-15 18:48:56.816168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.816200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.823852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eaef0 00:23:22.400 [2024-07-15 18:48:56.824745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.824776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.832642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed0b0 00:23:22.400 [2024-07-15 18:48:56.833416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.833447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.843710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fa3a0 00:23:22.400 [2024-07-15 18:48:56.845129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.845160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.853031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0350 00:23:22.400 [2024-07-15 18:48:56.854581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.854616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.860488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fda78 00:23:22.400 [2024-07-15 18:48:56.861405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.400 [2024-07-15 18:48:56.861435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:22.400 [2024-07-15 18:48:56.869151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8a50 
00:23:22.401 [2024-07-15 18:48:56.869956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.401 [2024-07-15 18:48:56.869984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:22.401 [2024-07-15 18:48:56.878064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1430 00:23:22.401 [2024-07-15 18:48:56.878914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.401 [2024-07-15 18:48:56.878953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:22.659 [2024-07-15 18:48:56.889121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df988 00:23:22.659 [2024-07-15 18:48:56.890311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.659 [2024-07-15 18:48:56.890342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.897834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb8b8 00:23:22.660 [2024-07-15 18:48:56.898869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.898900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.906532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fda78 00:23:22.660 [2024-07-15 18:48:56.907458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.915181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190feb58 00:23:22.660 [2024-07-15 18:48:56.915960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.926133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef270 00:23:22.660 [2024-07-15 18:48:56.927571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.927601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.932607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) 
with pdu=0x2000190e01f8 00:23:22.660 [2024-07-15 18:48:56.933168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.933194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.944204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.660 [2024-07-15 18:48:56.945537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.945568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.952441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e9168 00:23:22.660 [2024-07-15 18:48:56.953506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.953538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.961338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f9f68 00:23:22.660 [2024-07-15 18:48:56.962431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.962462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.970615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.660 [2024-07-15 18:48:56.971678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.971708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.979520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ee5c8 00:23:22.660 [2024-07-15 18:48:56.980590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.988007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fcdd0 00:23:22.660 [2024-07-15 18:48:56.988837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.988870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:56.996928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22c4880) with pdu=0x2000190e88f8 00:23:22.660 [2024-07-15 18:48:56.997780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:56.997811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.006537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e1b48 00:23:22.660 [2024-07-15 18:48:57.007494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.007526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.015656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fd640 00:23:22.660 [2024-07-15 18:48:57.016254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.016281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.024386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190feb58 00:23:22.660 [2024-07-15 18:48:57.024892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.024922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.034847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7da8 00:23:22.660 [2024-07-15 18:48:57.035952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.035978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.043263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1868 00:23:22.660 [2024-07-15 18:48:57.044230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.044261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.051947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec840 00:23:22.660 [2024-07-15 18:48:57.052811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.052843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.062665] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef6a8 00:23:22.660 [2024-07-15 18:48:57.064024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.064054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.072185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f92c0 00:23:22.660 [2024-07-15 18:48:57.073698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.073729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.078979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f20d8 00:23:22.660 [2024-07-15 18:48:57.079707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.660 [2024-07-15 18:48:57.079735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:22.660 [2024-07-15 18:48:57.088519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fcdd0 00:23:22.660 [2024-07-15 18:48:57.089441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.089480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:22.661 [2024-07-15 18:48:57.099725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fa3a0 00:23:22.661 [2024-07-15 18:48:57.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.101285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:22.661 [2024-07-15 18:48:57.109590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fda78 00:23:22.661 [2024-07-15 18:48:57.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.111243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:22.661 [2024-07-15 18:48:57.116237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5a90 00:23:22.661 [2024-07-15 18:48:57.116852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.116877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:22.661 [2024-07-15 18:48:57.127841] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec408 00:23:22.661 [2024-07-15 18:48:57.129234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.129264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:22.661 [2024-07-15 18:48:57.137037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f5be8 00:23:22.661 [2024-07-15 18:48:57.138547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.661 [2024-07-15 18:48:57.138579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:22.919 [2024-07-15 18:48:57.146947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f2d80 00:23:22.919 [2024-07-15 18:48:57.148380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.919 [2024-07-15 18:48:57.148411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.154424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3060 00:23:22.920 [2024-07-15 18:48:57.155334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.155364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.163671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df118 00:23:22.920 [2024-07-15 18:48:57.164543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.164573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.172874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb8b8 00:23:22.920 [2024-07-15 18:48:57.173374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.173401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.183362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e88f8 00:23:22.920 [2024-07-15 18:48:57.184487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.184518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:22.920 
[2024-07-15 18:48:57.192211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ddc00 00:23:22.920 [2024-07-15 18:48:57.193249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.193283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.201624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4298 00:23:22.920 [2024-07-15 18:48:57.202485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.202530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.210585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc998 00:23:22.920 [2024-07-15 18:48:57.212032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.212067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.220620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8e88 00:23:22.920 [2024-07-15 18:48:57.221427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.221457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.229147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fb048 00:23:22.920 [2024-07-15 18:48:57.230124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.238710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fdeb0 00:23:22.920 [2024-07-15 18:48:57.239614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.239643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.247727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eaab8 00:23:22.920 [2024-07-15 18:48:57.248573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.248605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002c 
p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.258800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8e88 00:23:22.920 [2024-07-15 18:48:57.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.260006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.266239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e88f8 00:23:22.920 [2024-07-15 18:48:57.266875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.266906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.276434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3060 00:23:22.920 [2024-07-15 18:48:57.277471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.277508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.285538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eee38 00:23:22.920 [2024-07-15 18:48:57.286556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.286594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.296554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de8a8 00:23:22.920 [2024-07-15 18:48:57.298166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.298206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.303223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e23b8 00:23:22.920 [2024-07-15 18:48:57.303857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.303884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.314738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e1b48 00:23:22.920 [2024-07-15 18:48:57.316154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.316187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.323438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e49b0 00:23:22.920 [2024-07-15 18:48:57.324706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.324737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.332276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f6458 00:23:22.920 [2024-07-15 18:48:57.333433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.333470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.341317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc128 00:23:22.920 [2024-07-15 18:48:57.342473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.920 [2024-07-15 18:48:57.342504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:22.920 [2024-07-15 18:48:57.350883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f1ca0 00:23:22.921 [2024-07-15 18:48:57.352161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.352191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:22.921 [2024-07-15 18:48:57.360193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef270 00:23:22.921 [2024-07-15 18:48:57.361474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.361534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:22.921 [2024-07-15 18:48:57.369310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de470 00:23:22.921 [2024-07-15 18:48:57.370584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.370615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:22.921 [2024-07-15 18:48:57.377423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:22.921 [2024-07-15 18:48:57.378879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.378911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:22.921 [2024-07-15 18:48:57.387589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e7818 00:23:22.921 [2024-07-15 18:48:57.388737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.388768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:22.921 [2024-07-15 18:48:57.394953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f3a28 00:23:22.921 [2024-07-15 18:48:57.395607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.921 [2024-07-15 18:48:57.395633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:23.181 [2024-07-15 18:48:57.405103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e9e10 00:23:23.181 [2024-07-15 18:48:57.406211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.181 [2024-07-15 18:48:57.406244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:23.181 [2024-07-15 18:48:57.413957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e6b70 00:23:23.181 [2024-07-15 18:48:57.414843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.181 [2024-07-15 18:48:57.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:23.181 [2024-07-15 18:48:57.423563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eb328 00:23:23.181 [2024-07-15 18:48:57.424562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.181 [2024-07-15 18:48:57.424597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:23.181 [2024-07-15 18:48:57.432564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8a50 00:23:23.181 [2024-07-15 18:48:57.433472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.181 [2024-07-15 18:48:57.433504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:23.181 [2024-07-15 18:48:57.441381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7da8 00:23:23.181 [2024-07-15 18:48:57.442228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.442259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.450663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e8088 00:23:23.182 [2024-07-15 18:48:57.451195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.451223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.461156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ebfd0 00:23:23.182 [2024-07-15 18:48:57.462410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.462445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.470558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ea680 00:23:23.182 [2024-07-15 18:48:57.471837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.471867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.480207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190feb58 00:23:23.182 [2024-07-15 18:48:57.481642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.481671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.486845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eea00 00:23:23.182 [2024-07-15 18:48:57.487504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.487529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.496420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc128 00:23:23.182 [2024-07-15 18:48:57.497202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.497229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.506097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fac10 00:23:23.182 [2024-07-15 18:48:57.507065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.507097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.515737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e9e10 00:23:23.182 [2024-07-15 18:48:57.516785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.516816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.525337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f20d8 00:23:23.182 [2024-07-15 18:48:57.526501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.526532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.534636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fc128 00:23:23.182 [2024-07-15 18:48:57.535794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.535825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.541981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fbcf0 00:23:23.182 [2024-07-15 18:48:57.542629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.542654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.553989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4f40 00:23:23.182 [2024-07-15 18:48:57.555510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.555540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.560707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7da8 00:23:23.182 [2024-07-15 18:48:57.561519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.561548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.570210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0bc0 00:23:23.182 [2024-07-15 18:48:57.570986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 
18:48:57.571014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.579134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e2c28 00:23:23.182 [2024-07-15 18:48:57.579891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.579918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.590074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e4de8 00:23:23.182 [2024-07-15 18:48:57.591233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.591265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.598752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e4de8 00:23:23.182 [2024-07-15 18:48:57.599775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.599807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.607430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ddc00 00:23:23.182 [2024-07-15 18:48:57.608345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.608377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.616112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed0b0 00:23:23.182 [2024-07-15 18:48:57.616888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.616919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.625405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5ec8 00:23:23.182 [2024-07-15 18:48:57.625986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.626014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.634250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e99d8 00:23:23.182 [2024-07-15 18:48:57.634692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:23.182 [2024-07-15 18:48:57.634719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.643743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de8a8 00:23:23.182 [2024-07-15 18:48:57.644278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.182 [2024-07-15 18:48:57.644305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:23.182 [2024-07-15 18:48:57.654269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec408 00:23:23.182 [2024-07-15 18:48:57.655431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.183 [2024-07-15 18:48:57.655463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.663352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e99d8 00:23:23.442 [2024-07-15 18:48:57.664660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.664692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.672717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8618 00:23:23.442 [2024-07-15 18:48:57.674101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.674133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.681609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fda78 00:23:23.442 [2024-07-15 18:48:57.682887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.682918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.690529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df988 00:23:23.442 [2024-07-15 18:48:57.691665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.691692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.699989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f4b08 00:23:23.442 [2024-07-15 18:48:57.700902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5966 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.700932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.709210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e1710 00:23:23.442 [2024-07-15 18:48:57.710276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.710308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.718954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190dfdc0 00:23:23.442 [2024-07-15 18:48:57.719921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.719959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.729609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f8a50 00:23:23.442 [2024-07-15 18:48:57.731028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.731059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.739453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e2c28 00:23:23.442 [2024-07-15 18:48:57.740884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.740918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.746088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eaef0 00:23:23.442 [2024-07-15 18:48:57.746829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.746858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.755729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fd208 00:23:23.442 [2024-07-15 18:48:57.756541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.756570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.766736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e5658 00:23:23.442 [2024-07-15 18:48:57.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.768100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.773231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f35f0 00:23:23.442 [2024-07-15 18:48:57.773845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.773871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.784409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f7da8 00:23:23.442 [2024-07-15 18:48:57.785381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.785411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.794876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df118 00:23:23.442 [2024-07-15 18:48:57.796343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.442 [2024-07-15 18:48:57.796377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:23.442 [2024-07-15 18:48:57.803215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ef270 00:23:23.443 [2024-07-15 18:48:57.804303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.804336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.813895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df118 00:23:23.443 [2024-07-15 18:48:57.815159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.815193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.824133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e0ea0 00:23:23.443 [2024-07-15 18:48:57.825323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.825357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.833735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190de8a8 00:23:23.443 [2024-07-15 18:48:57.834928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.834969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.843689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f5378 00:23:23.443 [2024-07-15 18:48:57.844868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.844901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.851348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f5be8 00:23:23.443 [2024-07-15 18:48:57.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.852068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.862824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ed4e8 00:23:23.443 [2024-07-15 18:48:57.863660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.863695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.872870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190df550 00:23:23.443 [2024-07-15 18:48:57.873577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.873610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.886552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e7c50 00:23:23.443 [2024-07-15 18:48:57.888174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.888209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.896417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec840 00:23:23.443 [2024-07-15 18:48:57.897686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.897721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.908989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eee38 00:23:23.443 [2024-07-15 18:48:57.910889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.910922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.443 [2024-07-15 18:48:57.916790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec408 00:23:23.443 [2024-07-15 18:48:57.917584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.443 [2024-07-15 18:48:57.917631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.930804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190eb760 00:23:23.701 [2024-07-15 18:48:57.932700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.701 [2024-07-15 18:48:57.932734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.938736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f46d0 00:23:23.701 [2024-07-15 18:48:57.939544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.701 [2024-07-15 18:48:57.939577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.948722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f92c0 00:23:23.701 [2024-07-15 18:48:57.949747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.701 [2024-07-15 18:48:57.949789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.958532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e3060 00:23:23.701 [2024-07-15 18:48:57.959092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.701 [2024-07-15 18:48:57.959124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.967298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e27f0 00:23:23.701 [2024-07-15 18:48:57.967754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.701 [2024-07-15 18:48:57.967784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:23.701 [2024-07-15 18:48:57.978714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190e49b0 00:23:23.701 [2024-07-15 
18:48:57.979930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.701 [2024-07-15 18:48:57.979972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:23.701 [2024-07-15 18:48:57.988571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f9b30
00:23:23.701 [2024-07-15 18:48:57.989708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.701 [2024-07-15 18:48:57.989741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:23.701 [2024-07-15 18:48:57.997836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190fd640
00:23:23.701 [2024-07-15 18:48:57.998788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.701 [2024-07-15 18:48:57.998825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:23.701 [2024-07-15 18:48:58.007153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190ec840
00:23:23.701 [2024-07-15 18:48:58.007931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.701 [2024-07-15 18:48:58.007978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:23.701 [2024-07-15 18:48:58.016620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4880) with pdu=0x2000190f0bc0
00:23:23.701 [2024-07-15 18:48:58.017403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.701 [2024-07-15 18:48:58.017433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:23.701
00:23:23.701 Latency(us)
00:23:23.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.701 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:23.701 nvme0n1 : 2.00 27138.43 106.01 0.00 0.00 4711.65 1880.26 14792.41
00:23:23.701 ===================================================================================================================
00:23:23.701 Total : 27138.43 106.01 0.00 0.00 4711.65 1880.26 14792.41
00:23:23.701 0
00:23:23.701 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:23.701 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:23.701 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:23.701 | .driver_specific
00:23:23.701 | .nvme_error
00:23:23.701 | .status_code
00:23:23.701 | .command_transient_transport_error'
00:23:23.701 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
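The trace just above is where the test turns the error stream into its pass/fail number: it asks the bdevperf instance for I/O statistics over the bperf RPC socket and extracts the transient transport error counter kept by the NVMe bdev layer (enabled with --nvme-error-stat in the setup trace further down). A minimal sketch of that helper in shell, assuming only what the trace shows; the RPC call and the jq filter are verbatim from the log, while the function body itself is not printed and is reconstructed here:

  # Hypothetical reconstruction of the traced get_transient_errcount step.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }
  # The caller asserts the counter is non-zero, which is the (( 213 > 0 )) check below.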
00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94285 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94285 ']' 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94285 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94285 00:23:23.959 killing process with pid 94285 00:23:23.959 Received shutdown signal, test time was about 2.000000 seconds 00:23:23.959 00:23:23.959 Latency(us) 00:23:23.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.959 =================================================================================================================== 00:23:23.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94285' 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94285 00:23:23.959 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94285 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94357 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94357 /var/tmp/bperf.sock 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94357 ']' 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:24.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
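Here the previous bdevperf instance (pid 94285) has been killed and a fresh one is started for the 128 KiB, queue-depth-16 randwrite error case. A minimal sketch of that launch, using only the arguments visible in the trace; the backgrounding and PID capture are assumptions inferred from the bperfpid=94357 assignment:

  # -m 2: core mask 0x2 (one core); -r: RPC listen socket; -w/-o/-q/-t: randwrite,
  # 131072-byte I/Os, queue depth 16, 2-second runtime; -z: wait for a perform_tests RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  waitforlisten "$bperfpid" /var/tmp/bperf.sock   # autotest_common.sh helper, as traced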
00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:24.217 18:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:24.217 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:24.217 Zero copy mechanism will not be used.
00:23:24.217 [2024-07-15 18:48:58.619251] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:23:24.217 [2024-07-15 18:48:58.619324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94357 ]
00:23:24.475 [2024-07-15 18:48:58.754036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:24.475 [2024-07-15 18:48:58.850204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:25.410 18:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:25.669 nvme0n1
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:25.669 18:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:25.669 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:25.669 Zero copy mechanism will not be used.
00:23:25.669 Running I/O for 2 seconds...
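Once the new bdevperf process is listening, the traced RPCs above configure the run: NVMe error statistics plus unlimited retries on the initiator, a controller attached with the TCP data digest (--ddgst) enabled, and crc32c error injection switched from disable to corrupt so that digest checks begin failing during the 2-second window. A minimal sketch of that sequence; every command and argument is verbatim from the trace, while the BPERF_RPC/TGT_RPC names are invented here and the assumption that rpc_cmd addresses the target application's default RPC socket is not confirmed by the log:

  BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  TGT_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'   # default socket assumed for rpc_cmd

  # collect per-controller NVMe error statistics and retry failed I/O indefinitely
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start with crc32c error injection disabled (ordering as traced)
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  # attach the subsystem with the TCP data digest (DDGST) enabled
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-enable injection in corrupt mode (-i 32 as traced) so digests start failing
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # release the bdevperf job that was started with -z
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests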
00:23:25.669 [2024-07-15 18:49:00.144648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.669 [2024-07-15 18:49:00.144973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.669 [2024-07-15 18:49:00.145003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.669 [2024-07-15 18:49:00.148779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.669 [2024-07-15 18:49:00.149082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.669 [2024-07-15 18:49:00.149114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.153091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.153412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.153438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.157189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.157479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.157539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.161403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.161713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.161737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.165833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.166159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.166186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.170214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.170525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.170556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.174481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.174793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.174826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.178823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.179139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.179160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.183154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.183455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.183485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.187391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.187710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.187741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.191751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.192058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.192080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.196116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.196425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.196461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.200462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.200779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.200810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.204763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.205067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.205105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.208999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.209293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.209319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.213113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.213404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.213433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.217332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.217663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.217692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.221686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.222016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.222052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.225938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.929 [2024-07-15 18:49:00.226255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.929 [2024-07-15 18:49:00.226290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.929 [2024-07-15 18:49:00.230062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.230364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.230392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.234132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.234414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.234442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.238156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.238468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.238498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.242220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.242507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.242540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.246201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.246506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.246540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.250288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.250569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.250614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.254156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.254472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.258067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.258344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 
[2024-07-15 18:49:00.258373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.262183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.262488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.262518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.266546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.266840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.266870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.270623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.270949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.271001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.274879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.275192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.275217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.279105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.279371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.279398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.283063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.283333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.283361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.287213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.287476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.287504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.291380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.291670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.291697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.295569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.295842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.295870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.299708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.299991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.300018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.303885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.304165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.304192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.307976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.308249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.308276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.312155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.312427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.312454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.316237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.316507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.316534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.320256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.320546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.324375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.324648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.324675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.328524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.328796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.328823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.332523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.332791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.332818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.336549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.336828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.336856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.340579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.340878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.930 [2024-07-15 18:49:00.340908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.930 [2024-07-15 18:49:00.344721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.930 [2024-07-15 18:49:00.345021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.345050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.348801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.349079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.349106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.352969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.353277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.353302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.357132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.357418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.357471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.361375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.361704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.361734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.365706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.366007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.366031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.369845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.370143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.370168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.374102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 
[2024-07-15 18:49:00.374383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.374408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.378312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.378591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.378616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.382420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.382711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.382730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.386660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.386923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.386957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.390773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.391048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.391074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.394976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.395251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.395277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.399119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.399416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.399445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.403446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.403752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.403781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.931 [2024-07-15 18:49:00.407793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:25.931 [2024-07-15 18:49:00.408098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.931 [2024-07-15 18:49:00.408129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.412200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.412504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.412534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.416451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.416739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.416769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.420537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.420820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.420851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.424758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.425063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.425092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.429007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.429309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.429337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.433148] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.433444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.433480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.437298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.437593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.437621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.441436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.441758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.441787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.445417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.445723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.445751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.449554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.449841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.449883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.453768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.454059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.454085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.458010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.458318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:26.191 [2024-07-15 18:49:00.462023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.462294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.462335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.466106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.466388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.466413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.470009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.470271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.470293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.473872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.474145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.474169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.477725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.477996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.478019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.481564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.481849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.485404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.485677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.485700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.489260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.489533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.489556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.493211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.493487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.493514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.497091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.497350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.497373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.501110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.501380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-07-15 18:49:00.501419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.191 [2024-07-15 18:49:00.505247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.191 [2024-07-15 18:49:00.505559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.505588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.509207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.509502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.509530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.513173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.513438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.513457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.517166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.517468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.521159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.521420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.521447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.525134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.525397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.525438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.529148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.529456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.529493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.533337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.533630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.533660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.537513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.537794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.537820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.541688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.541983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.542005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.545727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.546026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.546050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.549738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.550036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.550061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.553907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.554208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.554234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.557904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.558178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.558219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.561835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.562114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.562155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.565783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.566058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.566085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.569696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.569979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 
[2024-07-15 18:49:00.570020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.573711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.574004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.574028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.577757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.578052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.578077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.581768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.582059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.582084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.585835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.586161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.586198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.590073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.590354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.590379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.594069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.594330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.594370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.598066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.598332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.598371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.602023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.602289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.602331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.605969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.606264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.606290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.609888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.610157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.610178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.613766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.614040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.614075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.192 [2024-07-15 18:49:00.617633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.192 [2024-07-15 18:49:00.617900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-07-15 18:49:00.617923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.621662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.621950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.622000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.625694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.625982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.626007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.629831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.630124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.630153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.634165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.634460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.634486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.638433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.638735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.638781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.642562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.642855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.642882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.646702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.647007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.647026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.650871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.651173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.655098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.655377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.655406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.659197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.659461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.659489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.663343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.663640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.663667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.667394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.667689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.667718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.193 [2024-07-15 18:49:00.671658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.193 [2024-07-15 18:49:00.671940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-07-15 18:49:00.671975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.675825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.676140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.676169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.680187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.680503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.680529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.684322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 
[2024-07-15 18:49:00.684589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.684616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.688428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.688704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.688731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.692507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.692775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.692802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.696554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.696833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.453 [2024-07-15 18:49:00.696860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.453 [2024-07-15 18:49:00.700526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.453 [2024-07-15 18:49:00.700792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.700819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.704486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.704750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.704777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.708339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.708593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.708620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.712353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.712635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.712662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.716319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.716584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.716612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.720313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.720578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.720606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.724291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.724565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.724592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.728213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.728485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.728505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.732305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.732580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.732599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.736512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.736789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.736816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.740614] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.740884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.740909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.744742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.745041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.745065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.749034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.749331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.749362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.753211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.753538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.753568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.757433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.757761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.757802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.761764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.762080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.762106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.766061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.766364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.766395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:26.454 [2024-07-15 18:49:00.770385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.770688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.770718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.774834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.775148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.775178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.779181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.779473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.779503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.783438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.783722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.783748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.787785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.788084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.788108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.792075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.792376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.792402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.796237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.796531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.796564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.800372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.800663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.800687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.804443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.804719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.804746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.808470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.808739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.808759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.812538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.812807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.812835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.816551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.816820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.454 [2024-07-15 18:49:00.816848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.454 [2024-07-15 18:49:00.820587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.454 [2024-07-15 18:49:00.820850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.820877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.824626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.824890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.824918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.828566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.828829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.828856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.832627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.832895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.836715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.836995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.840857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.841163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.841190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.844995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.845261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.845289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.849129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.849401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.849422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.853107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.853366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.853393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.857212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.857544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.857571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.861285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.861590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.865278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.865582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.865610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.869430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.869737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.869765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.873570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.873891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.873918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.877814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.878121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.878142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.881838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.882139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 
[2024-07-15 18:49:00.882164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.885933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.886229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.886254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.890071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.890373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.894230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.894537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.894575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.898557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.898876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.898919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.902938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.903255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.903289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.907348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.907630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.907659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.911717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.912055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.912086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.916420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.916736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.916765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.921293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.921677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.925552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.925858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.925894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.929712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.930010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.930044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.455 [2024-07-15 18:49:00.933876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.455 [2024-07-15 18:49:00.934166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.455 [2024-07-15 18:49:00.934194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.715 [2024-07-15 18:49:00.937933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.715 [2024-07-15 18:49:00.938233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.715 [2024-07-15 18:49:00.938258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.715 [2024-07-15 18:49:00.942021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.715 [2024-07-15 18:49:00.942302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.715 [2024-07-15 18:49:00.942331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.715 [2024-07-15 18:49:00.946051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.715 [2024-07-15 18:49:00.946330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.715 [2024-07-15 18:49:00.946362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.715 [2024-07-15 18:49:00.950000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.715 [2024-07-15 18:49:00.950262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.950281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.953902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.954190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.954216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.957826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.958102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.958123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.961642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.961902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.961921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.965612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.965882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.965906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.969667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.969980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.970018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.973607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.973876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.973909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.977625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.977909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.977936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.981563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.981849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.981881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.985589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.985869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.985901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.989622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.989905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.989934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.993752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:00.994043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.994071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:00.997821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 
[2024-07-15 18:49:00.998117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:00.998142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.002028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.002317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.002345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.006304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.006591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.006619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.010447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.010738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.010768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.014535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.014839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.014869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.018667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.018990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.019038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.022758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.023033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.023061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.026834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.027112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.027139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.030856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.031134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.031158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.034826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.035098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.035119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.038779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.039056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.039089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.042808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.043132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.043167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.046965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.047264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.047296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.051243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:26.716 [2024-07-15 18:49:01.051533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.716 [2024-07-15 18:49:01.051562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.716 [2024-07-15 18:49:01.055445] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90
00:23:26.716 [2024-07-15 18:49:01.055716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.716 [2024-07-15 18:49:01.055747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:26.716 [2024-07-15 18:49:01.059549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90
00:23:26.716 [2024-07-15 18:49:01.059815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.716 [2024-07-15 18:49:01.059841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-entry pattern (tcp.c:2081:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90, followed by the offending WRITE sqid:1 cid:15 command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining WRITE, with only the lba and sqhd values changing, from [2024-07-15 18:49:01.063486] through [2024-07-15 18:49:01.650928] ...]
00:23:27.242 [2024-07-15 18:49:01.654628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90
00:23:27.242 [2024-07-15 18:49:01.654945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:27.242 [2024-07-15 18:49:01.654991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:27.242 [2024-07-15 18:49:01.658657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.658921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.658959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.662514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.662778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.662807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.666430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.666695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.666727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.670318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.670580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.670599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.674173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.674437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.678058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.678321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.678341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.681914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.682191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.682218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.685855] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.686159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.686183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.690038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.690330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.690359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.694301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.694652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.694688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.698436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.698754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.698784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.702646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.702919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.706660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.706934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.706970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.710610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.710890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.710917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:27.242 [2024-07-15 18:49:01.714713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.714994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.715020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.242 [2024-07-15 18:49:01.718907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.242 [2024-07-15 18:49:01.719213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.242 [2024-07-15 18:49:01.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.723044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.723318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.723347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.727085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.727377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.727412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.731247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.731520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.731548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.735294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.735566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.735595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.739345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.739615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.739647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.743412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.743715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.743763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.747610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.747886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.747914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.751728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.752050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.755754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.756044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.756071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.759780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.760071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.760091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.763878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.764171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.764202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.768092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.768388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.768415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.772431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.772741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.772770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.776912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.777250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.777286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.502 [2024-07-15 18:49:01.781313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.502 [2024-07-15 18:49:01.781658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.502 [2024-07-15 18:49:01.781688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.785795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.786134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.786160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.790287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.790610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.790635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.794784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.795128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.795158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.799339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.799656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.799699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.803834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.804160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.804206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.808240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.808563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.808600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.812673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.812998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.813028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.816980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.817310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.817338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.821424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.821776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.821807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.825888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.826224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.826262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.830419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.830745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 
[2024-07-15 18:49:01.830781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.834874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.835194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.835230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.839323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.839628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.839659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.843742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.844069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.844098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.848038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.848370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.848409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.852446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.852770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.852802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.856883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.857193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.857227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.861270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.861582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.861604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.865564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.865861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.865883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.869842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.870169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.870192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.874218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.874513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.874534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.878495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.878810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.878842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.882897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.883221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.883253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.887153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.887458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.887489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.891340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.891698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.895580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.895869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.895897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.899694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.899984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.900022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.903882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.503 [2024-07-15 18:49:01.904173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.503 [2024-07-15 18:49:01.904194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.503 [2024-07-15 18:49:01.907996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.908296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.908327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.912084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.912358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.912378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.916099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.916385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.916406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.920196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.920503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.920542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.924638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.924965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.924996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.929039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.929345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.929376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.933398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.933742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.933780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.937958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.938294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.938318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.942323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.942643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.942686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.946819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.947178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.947211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.951380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 
[2024-07-15 18:49:01.951694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.951725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.955771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.956118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.956158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.960154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.960458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.960480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.964549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.964873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.964896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.968828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.969146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.969168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.973144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.973442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.973474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.977559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.977875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.977897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.504 [2024-07-15 18:49:01.981991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.504 [2024-07-15 18:49:01.982295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.504 [2024-07-15 18:49:01.982318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:01.986389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:01.986709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:01.986745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:01.990918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:01.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:01.991273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:01.995277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:01.995578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:01.995602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:01.999568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:01.999864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:01.999896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.003838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.004132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.004154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.007930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.008234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.008256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.012260] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.012546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.012567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.016300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.016576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.016597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.020400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.020677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.020696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.024444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.024739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.024760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.028660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.028972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.028994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.032934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.033251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.033282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.037202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.037524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.037547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:27.765 [2024-07-15 18:49:02.041487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.041791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.041829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.045762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.046083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.046108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.050038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.050343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.054259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.054560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.054589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.058526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.058834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.062744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.063041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.063069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.067001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.067291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.067318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.071382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.071679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.071712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.075770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.076082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.076114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.080109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.080402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.080429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.084463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.084771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.084801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.088877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.765 [2024-07-15 18:49:02.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.765 [2024-07-15 18:49:02.089218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.765 [2024-07-15 18:49:02.093105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.093398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.093428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.097341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.097648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.097676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.101480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.101771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.101799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.105656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.105982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.109792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.110093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.110122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.113903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.114202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.114230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.117900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.118202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.118238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.121924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.122221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.122248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.126120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.126404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.126425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.130186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.130465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.130493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.134265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.134552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.134572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.766 [2024-07-15 18:49:02.138387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22c4bc0) with pdu=0x2000190fef90 00:23:27.766 [2024-07-15 18:49:02.138669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.766 [2024-07-15 18:49:02.138690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.766 00:23:27.766 Latency(us) 00:23:27.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.766 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:27.766 nvme0n1 : 2.00 7446.31 930.79 0.00 0.00 2144.82 1209.30 4774.77 00:23:27.766 =================================================================================================================== 00:23:27.766 Total : 7446.31 930.79 0.00 0.00 2144.82 1209.30 4774.77 00:23:27.766 0 00:23:27.766 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:27.766 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:27.766 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:27.766 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:27.766 | .driver_specific 00:23:27.766 | .nvme_error 00:23:27.766 | .status_code 00:23:27.766 | .command_transient_transport_error' 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 480 > 0 )) 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94357 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94357 ']' 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94357 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:28.025 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.025 18:49:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94357 00:23:28.282 killing process with pid 94357 00:23:28.282 Received shutdown signal, test time was about 2.000000 seconds 00:23:28.282 00:23:28.282 Latency(us) 00:23:28.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.282 =================================================================================================================== 00:23:28.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94357' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94357 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94357 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94065 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94065 ']' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94065 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94065 00:23:28.282 killing process with pid 94065 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94065' 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94065 00:23:28.282 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94065 00:23:28.540 00:23:28.540 real 0m17.395s 00:23:28.540 user 0m32.327s 00:23:28.540 sys 0m5.096s 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.540 ************************************ 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:28.540 END TEST nvmf_digest_error 00:23:28.540 ************************************ 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.540 18:49:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:28.798 18:49:03 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.798 rmmod nvme_tcp 00:23:28.798 rmmod nvme_fabrics 00:23:28.798 rmmod nvme_keyring 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 94065 ']' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 94065 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 94065 ']' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 94065 00:23:28.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (94065) - No such process 00:23:28.798 Process with pid 94065 is not found 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 94065 is not found' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:28.798 00:23:28.798 real 0m36.738s 00:23:28.798 user 1m7.075s 00:23:28.798 sys 0m10.732s 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.798 ************************************ 00:23:28.798 END TEST nvmf_digest 00:23:28.798 18:49:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.798 ************************************ 00:23:28.798 18:49:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:28.798 18:49:03 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:23:28.798 18:49:03 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:23:28.798 18:49:03 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:28.798 18:49:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:28.798 18:49:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.798 18:49:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.798 ************************************ 00:23:28.798 START TEST nvmf_mdns_discovery 00:23:28.798 ************************************ 00:23:28.798 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:28.798 * Looking for test storage... 
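Editor's note, before the mDNS test output begins: the pass condition for the digest-error run that just finished (the host/digest.sh @18/@27/@28/@71 trace above) boils down to reading the error counters accumulated by bdevperf's controller and requiring at least one COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of that check, using only the RPC socket and jq path visible in the trace; this is illustrative, not the canonical helper:

errcount=$(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
)
# In this run the counter is 480, so the (( errcount > 0 )) assertion passes.
(( errcount > 0 )) && echo "digest-error test passed: $errcount transient transport errors"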
00:23:29.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.056 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:23:29.057 
18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:29.057 Cannot find device "nvmf_tgt_br" 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.057 Cannot find device "nvmf_tgt_br2" 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:29.057 Cannot find device "nvmf_tgt_br" 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:29.057 Cannot find device "nvmf_tgt_br2" 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.057 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:29.338 18:49:03 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:29.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:23:29.338 00:23:29.338 --- 10.0.0.2 ping statistics --- 00:23:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.338 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:29.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:29.338 00:23:29.338 --- 10.0.0.3 ping statistics --- 00:23:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.338 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:29.338 00:23:29.338 --- 10.0.0.1 ping statistics --- 00:23:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.338 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94646 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:29.338 
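The ping replies above close out the veth fixture built by nvmf_veth_init in nvmf/common.sh. Condensed (retry guards and cleanup omitted, and the ordering simplified slightly), the topology the trace creates is roughly the sketch below; two target-side addresses are needed so that two separate mDNS discovery services can be announced later in the test.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                        # bridge the three veth peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root netns -> target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target netns -> initiator address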
18:49:03 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94646 00:23:29.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94646 ']' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.338 18:49:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.338 [2024-07-15 18:49:03.751406] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:29.338 [2024-07-15 18:49:03.751778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.596 [2024-07-15 18:49:03.888618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.596 [2024-07-15 18:49:03.999254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.596 [2024-07-15 18:49:03.999451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.596 [2024-07-15 18:49:03.999477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.596 [2024-07-15 18:49:03.999490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.596 [2024-07-15 18:49:03.999500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:29.596 [2024-07-15 18:49:03.999537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 [2024-07-15 18:49:04.920711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 [2024-07-15 18:49:04.928818] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 null0 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:23:30.527 null1 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 null2 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 null3 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94707 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94707 /tmp/host.sock 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94707 ']' 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.527 18:49:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.785 [2024-07-15 18:49:05.043417] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
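At this point the target-side bring-up is complete. Written out as direct rpc.py calls it is roughly the sketch below; rpc_cmd in the trace wraps scripts/rpc.py, and the socket it targets (assumed here to be the default /var/tmp/spdk.sock) is hidden by the helper. The --discovery-filter=address setting is what lets the listeners on 10.0.0.2 and 10.0.0.3 appear as distinct discovery entries later on.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumed default socket /var/tmp/spdk.sock
$rpc nvmf_set_config --discovery-filter=address      # distinguish discovery referrals by address
$rpc framework_start_init                            # target was started with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do
    $rpc bdev_null_create "$b" 1000 512              # 1000 MB null bdevs, 512-byte blocks
done
$rpc bdev_wait_for_examine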
00:23:30.785 [2024-07-15 18:49:05.043986] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94707 ] 00:23:30.785 [2024-07-15 18:49:05.193330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.043 [2024-07-15 18:49:05.310195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.609 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.609 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:31.609 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:31.609 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:23:31.609 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:23:31.868 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94732 00:23:31.868 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:31.868 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:31.868 18:49:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:23:31.868 Process 983 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:31.868 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:31.868 Successfully dropped root privileges. 00:23:31.868 avahi-daemon 0.8 starting up. 00:23:31.868 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:32.803 Successfully called chroot(). 00:23:32.803 Successfully dropped remaining capabilities. 00:23:32.803 No service file found in /etc/avahi/services. 00:23:32.803 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:32.803 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:32.803 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:32.803 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:32.803 Network interface enumeration completed. 00:23:32.803 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:23:32.803 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:32.803 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:23:32.803 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:32.803 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2264723616. 
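The avahi-daemon startup messages above come from a config fed through process substitution (/dev/fd/63) inside the target namespace. Written out as a regular file for readability (the test never creates such a file, so the path below is illustrative only):

cat > /tmp/avahi-test.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
avahi-daemon --kill                                        # stop any system-wide instance first
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf &
# Restricting avahi to the two target-side veths is why only the 10.0.0.2 and 10.0.0.3
# address records show up in the registration messages above.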
00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:32.803 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # xargs 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.062 [2024-07-15 18:49:07.456240] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.062 [2024-07-15 18:49:07.465399] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.062 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 [2024-07-15 18:49:07.505358] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 [2024-07-15 18:49:07.513318] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.063 18:49:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:33.998 [2024-07-15 18:49:08.356263] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:34.566 [2024-07-15 18:49:08.956287] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:34.566 [2024-07-15 18:49:08.956325] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:34.566 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:34.566 cookie is 0 00:23:34.566 is_local: 1 00:23:34.566 our_own: 0 00:23:34.566 wide_area: 0 00:23:34.566 multicast: 1 00:23:34.566 cached: 1 00:23:34.825 [2024-07-15 18:49:09.056268] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:34.825 [2024-07-15 18:49:09.056295] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:34.825 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:34.825 cookie is 0 00:23:34.825 is_local: 1 00:23:34.825 our_own: 0 00:23:34.825 wide_area: 0 00:23:34.825 multicast: 1 00:23:34.825 cached: 1 00:23:34.825 [2024-07-15 18:49:09.056308] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:23:34.825 [2024-07-15 18:49:09.156285] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:34.825 [2024-07-15 18:49:09.156313] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:34.825 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:34.825 cookie is 0 00:23:34.825 is_local: 1 00:23:34.825 our_own: 0 00:23:34.825 wide_area: 0 00:23:34.825 multicast: 1 00:23:34.825 cached: 1 00:23:34.825 [2024-07-15 18:49:09.256280] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:34.825 [2024-07-15 18:49:09.256313] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:34.825 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:34.825 cookie is 0 00:23:34.825 is_local: 1 00:23:34.825 our_own: 0 00:23:34.825 wide_area: 0 00:23:34.825 multicast: 1 00:23:34.825 cached: 1 00:23:34.825 [2024-07-15 18:49:09.256325] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:23:35.793 [2024-07-15 18:49:09.966523] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:35.793 [2024-07-15 18:49:09.966555] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:35.793 [2024-07-15 18:49:09.966571] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:35.793 [2024-07-15 18:49:10.052652] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:35.793 [2024-07-15 18:49:10.109751] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:35.793 [2024-07-15 18:49:10.109779] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:35.793 [2024-07-15 18:49:10.166221] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:35.793 [2024-07-15 18:49:10.166242] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:35.793 [2024-07-15 18:49:10.166255] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.793 [2024-07-15 18:49:10.252321] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:36.051 [2024-07-15 18:49:10.308075] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:36.051 [2024-07-15 18:49:10.308099] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:38.578 18:49:12 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:38.578 
18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:38.578 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.579 18:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.513 18:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.771 [2024-07-15 18:49:14.039928] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.771 [2024-07-15 18:49:14.040774] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:39.771 [2024-07-15 18:49:14.040965] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:39.771 [2024-07-15 18:49:14.041132] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:39.771 [2024-07-15 18:49:14.041149] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:39.771 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.772 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.772 [2024-07-15 18:49:14.047892] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:39.772 [2024-07-15 18:49:14.048790] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:39.772 [2024-07-15 18:49:14.048971] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:39.772 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.772 18:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:39.772 [2024-07-15 18:49:14.178860] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:39.772 [2024-07-15 18:49:14.179090] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:39.772 [2024-07-15 18:49:14.241184] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:39.772 [2024-07-15 18:49:14.241221] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
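The xtrace above shows each assertion helper pulling JSON off the host RPC socket and flattening it into one sorted line before the [[ ... == ... ]] comparison. A minimal bash reconstruction of those helpers, inferred from the rpc_cmd/jq/sort/xargs calls in this trace (not the verbatim mdns_discovery.sh definitions, which may differ in detail):

  # Sketch only: helper shapes inferred from the xtrace output above.
  # /tmp/host.sock is the host application's RPC socket used throughout this run;
  # rpc_cmd is the autotest wrapper around scripts/rpc.py seen in the trace.
  get_bdev_list() {
      # All bdev names as a single sorted, space-separated line.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {
      # Listening ports (trsvcid) of every path attached to controller $1.
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {
      # Notifications newer than the last seen id, then advance the high-water mark;
      # this matches notification_count=2 / notify_id=2, then notify_id=4 in the log above.
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }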
00:23:39.772 [2024-07-15 18:49:14.241228] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:39.772 [2024-07-15 18:49:14.241250] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:39.772 [2024-07-15 18:49:14.241285] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:39.772 [2024-07-15 18:49:14.241292] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.772 [2024-07-15 18:49:14.241298] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:39.772 [2024-07-15 18:49:14.241310] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.030 [2024-07-15 18:49:14.286943] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:40.030 [2024-07-15 18:49:14.286977] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:40.030 [2024-07-15 18:49:14.287014] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:40.030 [2024-07-15 18:49:14.287021] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:40.596 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 18:49:15 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:23:40.911 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.912 [2024-07-15 18:49:15.357127] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:40.912 [2024-07-15 18:49:15.357164] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:40.912 [2024-07-15 18:49:15.357195] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:40.912 [2024-07-15 18:49:15.357207] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.912 [2024-07-15 18:49:15.357627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.357657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.357669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.357680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.357690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.357701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.357712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.357721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.357732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.912 [2024-07-15 18:49:15.365119] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:40.912 [2024-07-15 18:49:15.365160] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:40.912 [2024-07-15 18:49:15.367576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.912 18:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:40.912 [2024-07-15 18:49:15.369752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.369781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.369793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.369804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.369815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.369825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.369836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.912 [2024-07-15 18:49:15.369845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.912 [2024-07-15 18:49:15.369856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:40.912 [2024-07-15 18:49:15.377595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.912 [2024-07-15 18:49:15.377700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.912 [2024-07-15 18:49:15.377716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:40.912 [2024-07-15 18:49:15.377727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:40.912 [2024-07-15 18:49:15.377742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:40.912 [2024-07-15 18:49:15.377756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:40.912 [2024-07-15 18:49:15.377765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:40.912 [2024-07-15 18:49:15.377777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:40.912 [2024-07-15 18:49:15.377791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:40.912 [2024-07-15 18:49:15.379723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:40.912 [2024-07-15 18:49:15.387646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.912 [2024-07-15 18:49:15.387713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.912 [2024-07-15 18:49:15.387727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:40.912 [2024-07-15 18:49:15.387736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:40.912 [2024-07-15 18:49:15.387755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:40.912 [2024-07-15 18:49:15.387767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:40.912 [2024-07-15 18:49:15.387776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:40.912 [2024-07-15 18:49:15.387785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:40.912 [2024-07-15 18:49:15.387796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:40.912 [2024-07-15 18:49:15.389734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:40.912 [2024-07-15 18:49:15.389795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.912 [2024-07-15 18:49:15.389809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:40.912 [2024-07-15 18:49:15.389819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:40.912 [2024-07-15 18:49:15.389832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:40.912 [2024-07-15 18:49:15.389845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:40.912 [2024-07-15 18:49:15.389854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:40.912 [2024-07-15 18:49:15.389863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:40.912 [2024-07-15 18:49:15.389875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
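The connect() failures above (errno = 111, i.e. ECONNREFUSED against port 4420) are the expected fallout of the listener shuffle a few steps back: 4421 listeners were added to both subsystems, then the original 4420 listeners were removed, so the already-attached 4420 paths keep retrying until the next discovery log page prunes them. A sketch of those target-side RPCs as they appear in this run, written against scripts/rpc.py (the rpc_cmd wrapper in the trace fronts the same client; the exact target RPC socket is an assumption):

  # Add the new 4421 listeners first, so each discovery controller gains a second path ...
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
  # ... then drop the original 4420 listeners; in-flight reconnects to 4420 now fail with
  # ECONNREFUSED (111) until the hosts re-read the discovery log page and remove those paths.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420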
00:23:41.180 [2024-07-15 18:49:15.397688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.180 [2024-07-15 18:49:15.397759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.397776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.180 [2024-07-15 18:49:15.397786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.397800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.397814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.397823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.397833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.180 [2024-07-15 18:49:15.397846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.180 [2024-07-15 18:49:15.399773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.180 [2024-07-15 18:49:15.399842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.399857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.180 [2024-07-15 18:49:15.399867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.399880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.399893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.399902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.399912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.180 [2024-07-15 18:49:15.399924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.180 [2024-07-15 18:49:15.407733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.180 [2024-07-15 18:49:15.407814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.407828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.180 [2024-07-15 18:49:15.407837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.407850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.407862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.407870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.407879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.180 [2024-07-15 18:49:15.407908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.180 [2024-07-15 18:49:15.409816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.180 [2024-07-15 18:49:15.409878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.409893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.180 [2024-07-15 18:49:15.409903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.409915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.409928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.409937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.409956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.180 [2024-07-15 18:49:15.409969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.180 [2024-07-15 18:49:15.417777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.180 [2024-07-15 18:49:15.417847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.417861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.180 [2024-07-15 18:49:15.417870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.417882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.417894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.417902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.417911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.180 [2024-07-15 18:49:15.417940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.180 [2024-07-15 18:49:15.419853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.180 [2024-07-15 18:49:15.419909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.419922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.180 [2024-07-15 18:49:15.419930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.419942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.419962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.419970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.419979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.180 [2024-07-15 18:49:15.419990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.180 [2024-07-15 18:49:15.427823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.180 [2024-07-15 18:49:15.427892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.427907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.180 [2024-07-15 18:49:15.427916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.427929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.427941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.427957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.427966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.180 [2024-07-15 18:49:15.427977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.180 [2024-07-15 18:49:15.429891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.180 [2024-07-15 18:49:15.429960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.429975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.180 [2024-07-15 18:49:15.429984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.429998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.430011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.430020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.180 [2024-07-15 18:49:15.430029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.180 [2024-07-15 18:49:15.430042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.180 [2024-07-15 18:49:15.437866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.180 [2024-07-15 18:49:15.437924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.180 [2024-07-15 18:49:15.437937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.180 [2024-07-15 18:49:15.437955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.180 [2024-07-15 18:49:15.437967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.180 [2024-07-15 18:49:15.437979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.180 [2024-07-15 18:49:15.438004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.438014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.438026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.439927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.439988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.440001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.440010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.440022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.440033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.440041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.440050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.181 [2024-07-15 18:49:15.440061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.181 [2024-07-15 18:49:15.447904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.181 [2024-07-15 18:49:15.447965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.447979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.181 [2024-07-15 18:49:15.447988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.448000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.448012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.448020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.448028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.448040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.449968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.450028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.450041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.450049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.450061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.450072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.450080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.450089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.181 [2024-07-15 18:49:15.450100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.181 [2024-07-15 18:49:15.457943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.181 [2024-07-15 18:49:15.458017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.458032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.181 [2024-07-15 18:49:15.458041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.458054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.458066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.458074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.458083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.458095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.460010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.460065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.460078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.460087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.460099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.460110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.460118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.460127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.181 [2024-07-15 18:49:15.460138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.181 [2024-07-15 18:49:15.467992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.181 [2024-07-15 18:49:15.468048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.468062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.181 [2024-07-15 18:49:15.468070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.468082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.468094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.468102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.468111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.468122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.470046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.470100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.470114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.470122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.470134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.470146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.470154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.470163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.181 [2024-07-15 18:49:15.470173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.181 [2024-07-15 18:49:15.478028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.181 [2024-07-15 18:49:15.478085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.478098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.181 [2024-07-15 18:49:15.478107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.478119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.478131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.478139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.478148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.478159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.480082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.480135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.480148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.480156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.480168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.480179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.480188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.480196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.181 [2024-07-15 18:49:15.480207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:41.181 [2024-07-15 18:49:15.488064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.181 [2024-07-15 18:49:15.488120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.488133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e0350 with addr=10.0.0.2, port=4420 00:23:41.181 [2024-07-15 18:49:15.488141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0350 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.488154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0350 (9): Bad file descriptor 00:23:41.181 [2024-07-15 18:49:15.488166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:41.181 [2024-07-15 18:49:15.488174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:41.181 [2024-07-15 18:49:15.488182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:41.181 [2024-07-15 18:49:15.488193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.181 [2024-07-15 18:49:15.490116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:41.181 [2024-07-15 18:49:15.490169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.181 [2024-07-15 18:49:15.490182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1899230 with addr=10.0.0.3, port=4420 00:23:41.181 [2024-07-15 18:49:15.490191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899230 is same with the state(5) to be set 00:23:41.181 [2024-07-15 18:49:15.490202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1899230 (9): Bad file descriptor 00:23:41.182 [2024-07-15 18:49:15.490215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:41.182 [2024-07-15 18:49:15.490223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:41.182 [2024-07-15 18:49:15.490231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:41.182 [2024-07-15 18:49:15.490256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
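Once the retries die down and the discovery pollers below report the 4420 paths as not found while 4421 is found again, the remaining checks only need to confirm that each mdns controller kept a single 4421 path and that all four namespaces are still exposed. A minimal by-hand version of that verification against the same host socket, using names exactly as they appear in this log:

  # Expect '4421' (and only 4421) per discovered controller after the 4420 listeners went away.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # Expect mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 to still be listed.
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs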
00:23:41.182 [2024-07-15 18:49:15.496246] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:41.182 [2024-07-15 18:49:15.496269] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:41.182 [2024-07-15 18:49:15.496295] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:41.182 [2024-07-15 18:49:15.496322] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:41.182 [2024-07-15 18:49:15.496335] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.182 [2024-07-15 18:49:15.496345] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.182 [2024-07-15 18:49:15.582313] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:41.182 [2024-07-15 18:49:15.582368] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:42.117 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.375 [2024-07-15 18:49:16.656437] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.375 18:49:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:43.310 18:49:17 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.310 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.569 [2024-07-15 
18:49:17.880497] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:43.569 2024/07/15 18:49:17 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:43.569 request: 00:23:43.569 { 00:23:43.569 "method": "bdev_nvme_start_mdns_discovery", 00:23:43.569 "params": { 00:23:43.569 "name": "mdns", 00:23:43.569 "svcname": "_nvme-disc._http", 00:23:43.569 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:43.569 } 00:23:43.569 } 00:23:43.569 Got JSON-RPC error response 00:23:43.569 GoRPCClient: error on JSON-RPC call 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.569 18:49:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:44.135 [2024-07-15 18:49:18.469036] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:44.135 [2024-07-15 18:49:18.569028] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:44.394 [2024-07-15 18:49:18.669041] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:44.394 [2024-07-15 18:49:18.669068] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:44.394 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:44.394 cookie is 0 00:23:44.394 is_local: 1 00:23:44.394 our_own: 0 00:23:44.394 wide_area: 0 00:23:44.394 multicast: 1 00:23:44.394 cached: 1 00:23:44.394 [2024-07-15 18:49:18.769045] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:44.394 [2024-07-15 18:49:18.769072] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:44.394 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:44.394 cookie is 0 00:23:44.394 is_local: 1 00:23:44.394 our_own: 0 00:23:44.394 wide_area: 0 00:23:44.394 multicast: 1 00:23:44.394 cached: 1 00:23:44.394 [2024-07-15 18:49:18.769084] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:23:44.394 [2024-07-15 18:49:18.869041] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:44.394 [2024-07-15 18:49:18.869066] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:44.394 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:44.394 cookie is 0 00:23:44.394 is_local: 1 00:23:44.394 our_own: 0 00:23:44.394 wide_area: 0 00:23:44.394 multicast: 1 00:23:44.394 cached: 1 00:23:44.653 [2024-07-15 18:49:18.969048] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:44.653 [2024-07-15 18:49:18.969075] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:44.653 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:44.653 cookie is 0 00:23:44.653 is_local: 1 00:23:44.653 our_own: 0 00:23:44.653 wide_area: 0 00:23:44.653 multicast: 1 00:23:44.653 cached: 1 00:23:44.653 [2024-07-15 18:49:18.969085] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:23:45.219 [2024-07-15 18:49:19.677956] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:45.219 [2024-07-15 18:49:19.677993] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:45.219 [2024-07-15 18:49:19.678007] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:45.478 [2024-07-15 18:49:19.765059] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:45.478 [2024-07-15 18:49:19.825126] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:45.478 [2024-07-15 18:49:19.825169] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:45.478 [2024-07-15 18:49:19.877791] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:45.478 [2024-07-15 18:49:19.877817] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:45.478 [2024-07-15 18:49:19.877830] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:45.736 [2024-07-15 18:49:19.963883] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:45.736 [2024-07-15 18:49:20.023999] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:45.736 [2024-07-15 18:49:20.024039] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:49.028 18:49:22 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:49.028 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:49.029 18:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 [2024-07-15 18:49:23.086336] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:49.029 2024/07/15 18:49:23 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:49.029 request: 00:23:49.029 { 00:23:49.029 "method": "bdev_nvme_start_mdns_discovery", 00:23:49.029 "params": { 00:23:49.029 "name": "cdc", 00:23:49.029 "svcname": "_nvme-disc._tcp", 00:23:49.029 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:49.029 } 00:23:49.029 } 00:23:49.029 Got JSON-RPC error response 00:23:49.029 GoRPCClient: error on JSON-RPC call 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94707 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94707 00:23:49.029 [2024-07-15 18:49:23.316096] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94732 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:23:49.029 Got SIGTERM, quitting. 00:23:49.029 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:49.029 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:49.029 avahi-daemon 0.8 exiting. 
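Note: the SIGTERM / "avahi-daemon 0.8 exiting" banner above comes from the mDNS responder that served the two emulated discovery services; the teardown continuing below then unloads the host NVMe/TCP modules and stops the nvmf_tgt process. A rough sketch of that teardown order, using only commands visible in this trace (the pids are specific to this run, and the role of pid 94732 is inferred from the ordering, not stated in the log):

    kill 94707                    # host-side SPDK app behind /tmp/host.sock
    wait 94707                    # its bdev_mdns_client stops the avahi poller on exit
    kill 94732                    # second helper pid from the same script; the
                                  # avahi-daemon exit banner above appears right after it
    modprobe -v -r nvme-tcp       # verbose removal also drops nvme_fabrics/nvme_keyring,
                                  # which is what the rmmod lines below report
    kill -0 94646 && kill 94646   # nvmf_tgt for this test (reactor_1 in the ps output below)
    wait 94646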
00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.029 rmmod nvme_tcp 00:23:49.029 rmmod nvme_fabrics 00:23:49.029 rmmod nvme_keyring 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94646 ']' 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94646 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94646 ']' 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94646 00:23:49.029 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94646 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.287 killing process with pid 94646 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94646' 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94646 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94646 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.287 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:49.546 00:23:49.546 real 0m20.570s 00:23:49.546 user 0m39.533s 00:23:49.546 sys 0m2.801s 00:23:49.546 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.546 18:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.546 ************************************ 00:23:49.546 END TEST nvmf_mdns_discovery 00:23:49.546 ************************************ 00:23:49.546 18:49:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
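Note: the mdns discovery suite ends here; run_test then launches host/multipath.sh, which fills the rest of this excerpt. That test rebuilds the veth/namespace network, starts nvmf_tgt with listeners on 10.0.0.2:4420 and 10.0.0.2:4421 for nqn.2016-06.io.spdk:cnode1, attaches a bdevperf controller with -x multipath, and then repeatedly flips the ANA state of the two listeners while a bpftrace probe on the target counts which port actually carries I/O. Condensed into one cycle, assembled only from commands that appear in the xtrace further below (the probe-output redirection and the helper plumbing are inferred, so read this as an illustrative sketch rather than the script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bpf=/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt

    # 1. Advertise 4420 as non_optimized and 4421 as optimized for cnode1:
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # 2. Probe the target (pid 95290 in this run) with nvmf_path.bt while bdevperf
    #    keeps issuing I/O; the probe emits "@path[10.0.0.2, PORT]: count" lines.
    #    (In the real script the output lands in test/nvmf/host/trace.txt; the
    #    redirection here is illustrative.)
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 "$bpf" > trace.txt 2>&1 &
    dtrace_pid=$!
    sleep 6

    # 3. The port that carried I/O must match the listener reported as optimized:
    active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$active_port" ]] && echo OK
    kill "$dtrace_pid"; rm -f trace.txt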
00:23:49.546 18:49:23 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:23:49.546 18:49:23 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:49.546 18:49:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.546 18:49:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.546 18:49:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.546 ************************************ 00:23:49.546 START TEST nvmf_host_multipath 00:23:49.546 ************************************ 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:49.546 * Looking for test storage... 00:23:49.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:49.546 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:49.547 Cannot 
find device "nvmf_tgt_br" 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:23:49.547 18:49:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.547 Cannot find device "nvmf_tgt_br2" 00:23:49.547 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:23:49.547 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:49.547 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:49.547 Cannot find device "nvmf_tgt_br" 00:23:49.547 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:23:49.547 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:49.806 Cannot find device "nvmf_tgt_br2" 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:49.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:49.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:49.806 18:49:24 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:49.806 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:50.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:50.065 00:23:50.065 --- 10.0.0.2 ping statistics --- 00:23:50.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.065 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:50.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:50.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:23:50.065 00:23:50.065 --- 10.0.0.3 ping statistics --- 00:23:50.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.065 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:50.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:50.065 00:23:50.065 --- 10.0.0.1 ping statistics --- 00:23:50.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.065 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95290 00:23:50.065 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95290 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95290 ']' 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.066 18:49:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:50.066 [2024-07-15 18:49:24.392106] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:50.066 [2024-07-15 18:49:24.392213] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.066 [2024-07-15 18:49:24.527333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:50.323 [2024-07-15 18:49:24.625636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:50.323 [2024-07-15 18:49:24.625690] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.323 [2024-07-15 18:49:24.625700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.323 [2024-07-15 18:49:24.625710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.323 [2024-07-15 18:49:24.625717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.323 [2024-07-15 18:49:24.625896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.323 [2024-07-15 18:49:24.625900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95290 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:51.257 [2024-07-15 18:49:25.688448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.257 18:49:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:51.515 Malloc0 00:23:51.515 18:49:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:51.773 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.031 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.031 [2024-07-15 18:49:26.471035] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.031 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.290 [2024-07-15 18:49:26.655146] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95394 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 95394 /var/tmp/bdevperf.sock 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95394 ']' 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.290 18:49:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:53.225 18:49:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.225 18:49:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:23:53.225 18:49:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:53.483 18:49:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:54.050 Nvme0n1 00:23:54.050 18:49:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:54.308 Nvme0n1 00:23:54.308 18:49:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:23:54.308 18:49:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.243 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:55.243 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.502 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.502 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:55.502 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:55.502 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95481 00:23:55.502 18:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:02.088 18:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:02.088 18:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:02.088 18:49:36 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:02.088 Attaching 4 probes... 00:24:02.088 @path[10.0.0.2, 4421]: 21991 00:24:02.088 @path[10.0.0.2, 4421]: 22751 00:24:02.088 @path[10.0.0.2, 4421]: 22110 00:24:02.088 @path[10.0.0.2, 4421]: 21972 00:24:02.088 @path[10.0.0.2, 4421]: 22138 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95481 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:02.088 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.347 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.347 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:02.347 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95613 00:24:02.347 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:02.347 18:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:08.953 18:49:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:08.953 18:49:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.953 Attaching 4 probes... 
00:24:08.953 @path[10.0.0.2, 4420]: 21352 00:24:08.953 @path[10.0.0.2, 4420]: 21354 00:24:08.953 @path[10.0.0.2, 4420]: 22079 00:24:08.953 @path[10.0.0.2, 4420]: 22287 00:24:08.953 @path[10.0.0.2, 4420]: 22536 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95613 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:08.953 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.214 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:09.214 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95744 00:24:09.214 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:09.214 18:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.768 Attaching 4 probes... 
00:24:15.768 @path[10.0.0.2, 4421]: 17470 00:24:15.768 @path[10.0.0.2, 4421]: 21336 00:24:15.768 @path[10.0.0.2, 4421]: 21710 00:24:15.768 @path[10.0.0.2, 4421]: 21948 00:24:15.768 @path[10.0.0.2, 4421]: 22083 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95744 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:15.768 18:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:15.768 18:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:16.026 18:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:16.026 18:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:16.026 18:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95879 00:24:16.026 18:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.580 Attaching 4 probes... 
00:24:22.580 00:24:22.580 00:24:22.580 00:24:22.580 00:24:22.580 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95879 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.580 18:49:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:22.837 18:49:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:22.837 18:49:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96007 00:24:22.837 18:49:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:22.837 18:49:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.414 Attaching 4 probes... 
00:24:29.414 @path[10.0.0.2, 4421]: 21606 00:24:29.414 @path[10.0.0.2, 4421]: 21749 00:24:29.414 @path[10.0.0.2, 4421]: 21561 00:24:29.414 @path[10.0.0.2, 4421]: 21753 00:24:29.414 @path[10.0.0.2, 4421]: 21322 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96007 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.414 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.414 [2024-07-15 18:50:03.692084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2412440 is same with the state(5) to be set 00:24:29.415 18:50:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:24:30.347 18:50:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:30.347 18:50:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96143 00:24:30.347 18:50:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:30.347 18:50:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:36.899 18:50:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:36.899 18:50:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.899 Attaching 4 probes... 00:24:36.899 @path[10.0.0.2, 4420]: 21147 00:24:36.899 @path[10.0.0.2, 4420]: 21583 00:24:36.899 @path[10.0.0.2, 4420]: 21779 00:24:36.899 @path[10.0.0.2, 4420]: 21886 00:24:36.899 @path[10.0.0.2, 4420]: 21614 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96143 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.899 [2024-07-15 18:50:11.261553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.899 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:37.175 18:50:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:24:43.756 18:50:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:43.756 18:50:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96335 00:24:43.756 18:50:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95290 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:43.756 18:50:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.332 Attaching 4 probes... 
00:24:50.332 @path[10.0.0.2, 4421]: 20868 00:24:50.332 @path[10.0.0.2, 4421]: 21119 00:24:50.332 @path[10.0.0.2, 4421]: 21467 00:24:50.332 @path[10.0.0.2, 4421]: 21657 00:24:50.332 @path[10.0.0.2, 4421]: 21845 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96335 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95394 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95394 ']' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95394 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95394 00:24:50.332 killing process with pid 95394 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95394' 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95394 00:24:50.332 18:50:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95394 00:24:50.332 Connection closed with partial response: 00:24:50.332 00:24:50.332 00:24:50.332 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95394 00:24:50.332 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:50.332 [2024-07-15 18:49:26.723610] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:24:50.332 [2024-07-15 18:49:26.723731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95394 ] 00:24:50.332 [2024-07-15 18:49:26.859716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.332 [2024-07-15 18:49:26.950198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.332 Running I/O for 90 seconds... 
00:24:50.332 [2024-07-15 18:49:36.775176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.775544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.775558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.332 [2024-07-15 18:49:36.776832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.332 [2024-07-15 18:49:36.776852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.776887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.776901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.776921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.776936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.776971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:50.333 [2024-07-15 18:49:36.776985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.777972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.777994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:24:50.333 [2024-07-15 18:49:36.778568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.333 [2024-07-15 18:49:36.778814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.333 [2024-07-15 18:49:36.778834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.778848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.778873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.778887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.778908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.778922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.778942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.778966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.779001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.334 [2024-07-15 18:49:36.779035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:50.334 [2024-07-15 18:49:36.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.779977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.779991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.334 [2024-07-15 18:49:36.780304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.334 [2024-07-15 18:49:36.780324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:24:50.335 [2024-07-15 18:49:36.780675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:36.780966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:36.780981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.320665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.320678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-07-15 18:49:43.321757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.335 [2024-07-15 18:49:43.321777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.321791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.321825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.321859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.321893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.321927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.321971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.321990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.322004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:114 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.322047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.322081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-07-15 18:49:43.322115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 
18:49:43.322395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 
cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.322957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.322972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.323001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.323017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.336 [2024-07-15 18:49:43.323038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-07-15 18:49:43.323054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.323981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.323996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 
18:49:43.324106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.324956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.324988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.325024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.325061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.325098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.325135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.337 [2024-07-15 18:49:43.325172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:24:50.337 [2024-07-15 18:49:43.325209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-07-15 18:49:43.325224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.325261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.325298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.325335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.325376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 
18:49:43.325933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.325968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.325983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-07-15 18:49:43.326228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124000 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.326461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.326475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.338 [2024-07-15 18:49:43.327314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.338 [2024-07-15 18:49:43.327332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.327363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.327395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.327428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.327460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.327492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:24:50.339 [2024-07-15 18:49:43.339070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-07-15 18:49:43.339679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.339966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.339987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.339 [2024-07-15 18:49:43.340774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.339 [2024-07-15 18:49:43.340794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.340822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.340842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.340870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.340890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.340918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.340938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.340980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.341028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.341075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.341130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.341178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.341246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:003a p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.342963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.342992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343358] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.340 [2024-07-15 18:49:43.343483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-07-15 18:49:43.343502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 
18:49:43.343843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.343939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.344000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.344048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.344096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123840 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.344958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.344979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.345033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.345081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.345129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.345177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.341 [2024-07-15 18:49:43.345225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.345276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.345323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.345371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-07-15 18:49:43.345466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.341 [2024-07-15 18:49:43.345505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.345526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.346988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 
18:49:43.347190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.342 [2024-07-15 18:49:43.347820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.347959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.347980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-07-15 18:49:43.348466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.342 [2024-07-15 18:49:43.348495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
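Note on the repeated "(03/02)" in the completion notices above and below: that pair is the status code type (SCT) and status code (SC) taken from the NVMe completion entry, and under SCT 0x3 (path-related status) SC 0x02 is "Asymmetric Access Inaccessible", which is why every queued READ/WRITE on qid:1 is being reported this way. The following is a minimal standalone sketch of such a decode, not SPDK code; it only assumes the NVMe base-specification layout of completion dword 3 (phase tag in bit 16, SC in bits 24:17, SCT in bits 27:25, M in bit 30, DNR in bit 31), and all names in it are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Map SCT 0x3 (path-related) status codes to their spec names. */
static const char *sct3_name(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "INTERNAL PATH ERROR";
    case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
    case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
    case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
    default:   return "UNKNOWN PATH-RELATED STATUS";
    }
}

int main(void)
{
    /* Example completion dword 3 with SCT=0x3, SC=0x02, p=0, m=0, dnr=0,
     * matching the "(03/02) ... p:0 m:0 dnr:0" fields printed in the log. */
    uint32_t cpl_dw3 = (0x3u << 25) | (0x02u << 17);

    uint8_t sct = (cpl_dw3 >> 25) & 0x7;
    uint8_t sc  = (cpl_dw3 >> 17) & 0xff;
    uint8_t p   = (cpl_dw3 >> 16) & 0x1;
    uint8_t m   = (cpl_dw3 >> 30) & 0x1;
    uint8_t dnr = (cpl_dw3 >> 31) & 0x1;

    if (sct == 0x3) {
        printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
               sct3_name(sc), sct, sc, p, m, dnr);
    } else {
        printf("non-path status (%02x/%02x)\n", sct, sc);
    }
    return 0;
}

Compiled and run, this prints "ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0", i.e. the same SCT/SC interpretation the notices in this log are reporting while the ANA state of the namespace is inaccessible.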
00:24:50.343 [2024-07-15 18:49:43.348639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.348965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.348993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.349280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.349300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-07-15 18:49:43.350856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.343 [2024-07-15 18:49:43.350874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.350887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.350906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.350918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.350937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.350950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.350979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.350993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-07-15 18:49:43.351372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.351565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.344 [2024-07-15 18:49:43.361795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-07-15 18:49:43.361808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.361826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.361839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.361857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.361870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.361888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 
18:49:43.361902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.361920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.361939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.361970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.361982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.362001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.362033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.362060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.363960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.363983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 
18:49:43.364550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-07-15 18:49:43.364729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.364990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.365013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.345 [2024-07-15 18:49:43.365044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-07-15 18:49:43.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.365972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.365994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.366288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.367967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.367998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.368020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.368050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.346 [2024-07-15 18:49:43.368072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.346 [2024-07-15 18:49:43.368103] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.368962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.368985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.347 [2024-07-15 18:49:43.369464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 
18:49:43.369690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.369940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.369973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123912 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.347 [2024-07-15 18:49:43.370267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.347 [2024-07-15 18:49:43.370288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.370717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.370770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.370875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.370906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.370928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.371926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.371977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:24:50.348 [2024-07-15 18:49:43.372288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.372965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.372996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.373049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.348 [2024-07-15 18:49:43.373071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.348 [2024-07-15 18:49:43.373102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.348 [2024-07-15 18:49:43.373124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.349 [2024-07-15 18:49:43.373568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124224 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.373965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.373983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374441] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.374504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.374517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.375062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.375084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.375105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.375118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.375136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.375149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.349 [2024-07-15 18:49:43.375167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.349 [2024-07-15 18:49:43.375180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:002f p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 
18:49:43.375911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.375970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.375983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124696 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.350 [2024-07-15 18:49:43.376424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.350 [2024-07-15 18:49:43.376459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.350 [2024-07-15 18:49:43.376478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.376978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.376996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.351 [2024-07-15 18:49:43.377168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.377969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.377987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 
18:49:43.378094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124136 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:50.351 [2024-07-15 18:49:43.378416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.351 [2024-07-15 18:49:43.378434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.352 [2024-07-15 18:49:43.378856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.378977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.378990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.379009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.379027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 
18:49:43.379045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.379058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.386977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.386999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.352 [2024-07-15 18:49:43.387481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.352 [2024-07-15 18:49:43.387749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.387773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.387816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.387833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.387868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.387884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.387911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.387927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.388960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.388976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.353 [2024-07-15 18:49:43.389339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.353 [2024-07-15 18:49:43.389356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.389670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.389960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.389982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 
18:49:43.390237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.354 [2024-07-15 18:49:43.390668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.390710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.390753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:43.390922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:43.390939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.354 [2024-07-15 18:49:50.330828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.354 [2024-07-15 18:49:50.330842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.330861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.330875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.330895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.330909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.330929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.330943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.330976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.330990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:24:50.355 [2024-07-15 18:49:50.331111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.355 [2024-07-15 18:49:50.331269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.331963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.331985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:50.355 [2024-07-15 18:49:50.332318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.355 [2024-07-15 18:49:50.332473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.355 [2024-07-15 18:49:50.332486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.332519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.332553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.332586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.332970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.332993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 
cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.333407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.356 [2024-07-15 18:49:50.333682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.333718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.333755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.333976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.333999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.356 [2024-07-15 18:49:50.334278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.356 [2024-07-15 18:49:50.334292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.334976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.334990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 
18:49:50.335195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.357 [2024-07-15 18:49:50.335417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.357 [2024-07-15 18:49:50.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.357 [2024-07-15 18:49:50.335745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.692977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.692990] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.358 [2024-07-15 18:50:03.693499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.693971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.693985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 
[2024-07-15 18:50:03.694255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.358 [2024-07-15 18:50:03.694372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.358 [2024-07-15 18:50:03.694388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [2024-07-15 18:50:03.694556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-07-15 18:50:03.694573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.359 [... the matching spdk_nvme_print_completion record and the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair for every other outstanding command on qid:1 follow here: READ commands (len:8, SGL TRANSPORT DATA BLOCK) for lba 14392 through 14824 and WRITE commands (len:8, SGL DATA BLOCK OFFSET len:0x1000) for lba 14928 through 15120, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:24:50.361
[2024-07-15 18:50:03.697024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.361 [2024-07-15 18:50:03.697039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.361 [2024-07-15 18:50:03.697247] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2141500 was disconnected and freed. reset controller. 00:24:50.361 [2024-07-15 18:50:03.698326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.361 [2024-07-15 18:50:03.698395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.361 [2024-07-15 18:50:03.698414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.361 [2024-07-15 18:50:03.698445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d4d0 (9): Bad file descriptor 00:24:50.361 [2024-07-15 18:50:03.698543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.361 [2024-07-15 18:50:03.698563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230d4d0 with addr=10.0.0.2, port=4421 00:24:50.361 [2024-07-15 18:50:03.698579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230d4d0 is same with the state(5) to be set 00:24:50.361 [2024-07-15 18:50:03.698600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d4d0 (9): Bad file descriptor 00:24:50.361 [2024-07-15 18:50:03.698631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.361 [2024-07-15 18:50:03.698649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.361 [2024-07-15 18:50:03.698665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.361 [2024-07-15 18:50:03.698687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.361 [2024-07-15 18:50:03.698698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.361 [2024-07-15 18:50:13.742506] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
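[For reference: the failover exercised above can be reproduced by hand. This is a hedged sketch, not the exact multipath.sh logic, reusing the subsystem name and bdevperf RPC socket that appear in this log:
    # move the subsystem's TCP listener from port 4420 to 4421 on the target side
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # in-flight I/O on the deleted submission queue completes as ABORTED - SQ DELETION (00/08);
    # bdev_nvme then resets the controller and reconnects to the surviving listener, as the
    # "resetting controller" / "Resetting controller successful" notices above show
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
]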
00:24:50.361 Received shutdown signal, test time was about 55.178784 seconds 00:24:50.361 00:24:50.361 Latency(us) 00:24:50.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.361 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:50.361 Verification LBA range: start 0x0 length 0x4000 00:24:50.361 Nvme0n1 : 55.18 9331.43 36.45 0.00 0.00 13696.95 1513.57 7030452.42 00:24:50.361 =================================================================================================================== 00:24:50.361 Total : 9331.43 36.45 0.00 0.00 13696.95 1513.57 7030452.42 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.361 rmmod nvme_tcp 00:24:50.361 rmmod nvme_fabrics 00:24:50.361 rmmod nvme_keyring 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95290 ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95290 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95290 ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95290 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95290 00:24:50.361 killing process with pid 95290 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95290' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95290 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95290 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:50.361 00:24:50.361 real 1m0.864s 00:24:50.361 user 2m49.038s 00:24:50.361 sys 0m17.097s 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.361 18:50:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:50.361 ************************************ 00:24:50.361 END TEST nvmf_host_multipath 00:24:50.361 ************************************ 00:24:50.361 18:50:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.361 18:50:24 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:50.361 18:50:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.361 18:50:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.361 18:50:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.361 ************************************ 00:24:50.361 START TEST nvmf_timeout 00:24:50.361 ************************************ 00:24:50.361 18:50:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:50.619 * Looking for test storage... 
00:24:50.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.619 
18:50:24 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.619 18:50:24 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:50.619 Cannot find device "nvmf_tgt_br" 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.619 Cannot find device "nvmf_tgt_br2" 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:24:50.619 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:50.620 Cannot find device "nvmf_tgt_br" 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:50.620 Cannot find device "nvmf_tgt_br2" 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:50.620 18:50:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.620 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:50.620 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:50.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:50.881 00:24:50.881 --- 10.0.0.2 ping statistics --- 00:24:50.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.881 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:50.881 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:50.881 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:50.881 00:24:50.881 --- 10.0.0.3 ping statistics --- 00:24:50.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.881 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:50.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:50.881 00:24:50.881 --- 10.0.0.1 ping statistics --- 00:24:50.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.881 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96661 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96661 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96661 ']' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.881 18:50:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:50.881 [2024-07-15 18:50:25.318124] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
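[For reference: a condensed sketch of the network topology that nvmf_veth_init builds in the trace above, using the same namespace and interface names; the second target interface, link-up commands, and error handling are omitted:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The nvmf_tgt process below runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.2, while the initiator-side tools connect from 10.0.0.1 across the nvmf_br bridge, which is why the three pings above serve as a connectivity check.]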
00:24:50.881 [2024-07-15 18:50:25.318807] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.150 [2024-07-15 18:50:25.463432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.150 [2024-07-15 18:50:25.549350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.150 [2024-07-15 18:50:25.549411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.150 [2024-07-15 18:50:25.549421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.150 [2024-07-15 18:50:25.549429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.150 [2024-07-15 18:50:25.549437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.150 [2024-07-15 18:50:25.549917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.150 [2024-07-15 18:50:25.549918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.082 18:50:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.082 [2024-07-15 18:50:26.562081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.347 18:50:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:52.347 Malloc0 00:24:52.347 18:50:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.911 18:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.911 18:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.168 [2024-07-15 18:50:27.482681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96748 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96748 /var/tmp/bdevperf.sock 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96748 ']' 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.168 18:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.168 [2024-07-15 18:50:27.549796] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:24:53.168 [2024-07-15 18:50:27.549897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96748 ] 00:24:53.425 [2024-07-15 18:50:27.693646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.425 [2024-07-15 18:50:27.804762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.355 18:50:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.355 18:50:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:24:54.355 18:50:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:54.355 18:50:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:54.612 NVMe0n1 00:24:54.612 18:50:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96801 00:24:54.612 18:50:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.612 18:50:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:24:54.869 Running I/O for 10 seconds... 
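[For reference: the initiator-side configuration the timeout test just pushed through bdevperf's RPC socket, condensed from the trace above; the option values are copied verbatim from the trace, and this is only a sketch of that step, not the full timeout.sh flow:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, bdev_nvme retries a dropped connection every 2 seconds and declares the controller lost after about 5 seconds without connectivity, which is the window the listener-removal step right after this is designed to exercise.]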
00:24:55.799 18:50:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.060 [2024-07-15 18:50:30.319880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319975] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.319992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320008] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320095] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [... the same tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* record for tqpair=0x211e900 repeats here many more times while the 10.0.0.2:4420 listener is being removed ...] 00:24:56.060 [2024-07-15 18:50:30.320438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 
18:50:30.320614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e900 is same with the state(5) to be set 00:24:56.060 [2024-07-15 18:50:30.320934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.060 [2024-07-15 18:50:30.320975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.320987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.061 [2024-07-15 18:50:30.320996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.061 [2024-07-15 18:50:30.321015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.061 [2024-07-15 18:50:30.321033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4240 is same with the state(5) to be set 00:24:56.061 [2024-07-15 18:50:30.321078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.061 [2024-07-15 18:50:30.321421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 
18:50:30.321652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.321986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.321996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100880 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.061 [2024-07-15 18:50:30.322540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.061 [2024-07-15 18:50:30.322548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.062 [2024-07-15 18:50:30.322568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.062 [2024-07-15 18:50:30.322586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.062 [2024-07-15 18:50:30.322605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.062 [2024-07-15 18:50:30.322623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 
18:50:30.322832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.062 [2024-07-15 18:50:30.322926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.322987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.322999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.062 [2024-07-15 18:50:30.323544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.062 [2024-07-15 18:50:30.323573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.062 [2024-07-15 18:50:30.323581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101352 len:8 PRP1 0x0 PRP2 0x0 00:24:56.062 [2024-07-15 18:50:30.323589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.062 [2024-07-15 18:50:30.323635] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18318d0 was disconnected 
and freed. reset controller. 00:24:56.062 [2024-07-15 18:50:30.323823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.062 [2024-07-15 18:50:30.323851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4240 (9): Bad file descriptor 00:24:56.062 [2024-07-15 18:50:30.323936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.062 [2024-07-15 18:50:30.323963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4240 with addr=10.0.0.2, port=4420 00:24:56.062 [2024-07-15 18:50:30.323973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4240 is same with the state(5) to be set 00:24:56.062 [2024-07-15 18:50:30.323988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4240 (9): Bad file descriptor 00:24:56.062 [2024-07-15 18:50:30.324001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.062 [2024-07-15 18:50:30.324010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.062 [2024-07-15 18:50:30.324020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.062 [2024-07-15 18:50:30.324036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.062 [2024-07-15 18:50:30.324045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.062 18:50:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:24:57.958 [2024-07-15 18:50:32.336131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-07-15 18:50:32.336197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4240 with addr=10.0.0.2, port=4420 00:24:57.958 [2024-07-15 18:50:32.336211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4240 is same with the state(5) to be set 00:24:57.958 [2024-07-15 18:50:32.336235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4240 (9): Bad file descriptor 00:24:57.959 [2024-07-15 18:50:32.336252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.959 [2024-07-15 18:50:32.336261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.959 [2024-07-15 18:50:32.336273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.959 [2024-07-15 18:50:32.336296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
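The connect() failures above all carry errno = 111; a quick way to see what that code means (an illustration only, not part of the test run):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused
    # Expected here: the 10.0.0.2:4420 listener was removed at the top of this block, so every
    # reconnect attempt from bdevperf is refused until host/timeout.sh re-adds the listener.
    # (If the build's rpc.py supports it, nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
    #  on the target side would likewise show an empty listener list during this window.)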
00:24:57.959 [2024-07-15 18:50:32.336305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.959 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:24:57.959 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.959 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:58.216 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:58.216 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:24:58.216 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:58.216 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:58.474 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:58.474 18:50:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:24:59.891 [2024-07-15 18:50:34.336468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.891 [2024-07-15 18:50:34.336534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c4240 with addr=10.0.0.2, port=4420 00:24:59.891 [2024-07-15 18:50:34.336550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4240 is same with the state(5) to be set 00:24:59.891 [2024-07-15 18:50:34.336576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c4240 (9): Bad file descriptor 00:24:59.891 [2024-07-15 18:50:34.336593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.891 [2024-07-15 18:50:34.336603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.891 [2024-07-15 18:50:34.336615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.891 [2024-07-15 18:50:34.336640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.891 [2024-07-15 18:50:34.336651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.418 [2024-07-15 18:50:36.336702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.418 [2024-07-15 18:50:36.336751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.418 [2024-07-15 18:50:36.336762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.418 [2024-07-15 18:50:36.336772] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:02.418 [2024-07-15 18:50:36.336795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.986 00:25:02.986 Latency(us) 00:25:02.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.986 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.986 Verification LBA range: start 0x0 length 0x4000 00:25:02.986 NVMe0n1 : 8.15 1539.19 6.01 15.71 0.00 82346.04 1638.40 7030452.42 00:25:02.986 =================================================================================================================== 00:25:02.986 Total : 1539.19 6.01 15.71 0.00 82346.04 1638.40 7030452.42 00:25:02.986 0 00:25:03.553 18:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:25:03.553 18:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.553 18:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:03.827 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:25:03.827 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:25:03.827 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:03.827 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96801 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96748 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96748 ']' 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96748 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96748 00:25:04.117 killing process with pid 96748 00:25:04.117 Received shutdown signal, test time was about 9.236834 seconds 00:25:04.117 00:25:04.117 Latency(us) 00:25:04.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.117 =================================================================================================================== 00:25:04.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96748' 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96748 00:25:04.117 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96748 00:25:04.376 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.376 [2024-07-15 18:50:38.855065] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96952 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96952 /var/tmp/bdevperf.sock 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96952 ']' 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.652 18:50:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.652 [2024-07-15 18:50:38.921949] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:04.652 [2024-07-15 18:50:38.922038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96952 ] 00:25:04.652 [2024-07-15 18:50:39.058729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.911 [2024-07-15 18:50:39.155766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.478 18:50:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.478 18:50:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:05.478 18:50:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:05.737 18:50:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:25:06.304 NVMe0n1 00:25:06.304 18:50:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97001 00:25:06.304 18:50:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.304 18:50:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:25:06.304 Running I/O for 10 seconds... 
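The bdevperf instance started above is attached to the target with explicit reconnect and controller-loss knobs (timeout.sh@78/@79), and the next step of the trace (timeout.sh@87, below) removes the TCP listener so every reconnect attempt fails with errno 111. A condensed bash sketch of that sequence follows; every flag and path is copied from the commands logged in this run, but the ordering, backgrounding, and sleep are inferred from the trace rather than taken from host/timeout.sh itself.

    #!/usr/bin/env bash
    # Sketch of the failure scenario exercised here: attach with short reconnect
    # timeouts, start a verify workload, then drop the target listener.
    # Flags and values are copied from the commands in this trace.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Same bdev_nvme option as logged at timeout.sh@78.
    "$rpc_py" -s "$sock" bdev_nvme_set_options -r -1

    # Attach NVMe0 with a 5 s controller-loss timeout, a 2 s fast-io-fail timeout,
    # and a 1 s delay between reconnect attempts (timeout.sh@79).
    "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n "$nqn" \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the verify job in the background, then remove the listener on the
    # target side so reconnects fail (posix.c: connect() failed, errno = 111).
    "$bdevperf_py" -s "$sock" perform_tests &
    sleep 1
    "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420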
00:25:07.237 18:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.497 [2024-07-15 18:50:41.803633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803881] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803918] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.803996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the 
state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804171] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804198] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.497 [2024-07-15 18:50:41.804303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cb50 is same with the state(5) to be set 00:25:07.498 [2024-07-15 18:50:41.804895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.804938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.804973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.804999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805195] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:07.498 [2024-07-15 18:50:41.805655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 18:50:41.805859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.498 [2024-07-15 18:50:41.805871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.498 [2024-07-15 
18:50:41.805881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.805893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.805903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.805914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.805924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.805936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.805954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.805966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.805976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.805988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.805998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.499 [2024-07-15 18:50:41.806434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.499 [2024-07-15 18:50:41.806814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.499 [2024-07-15 18:50:41.806824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.806983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.806994] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94632 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:07.500 [2024-07-15 18:50:41.807442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.500 [2024-07-15 18:50:41.807618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.500 [2024-07-15 18:50:41.807748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.500 [2024-07-15 18:50:41.807774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.500 [2024-07-15 18:50:41.807783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.501 [2024-07-15 18:50:41.807792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94072 len:8 PRP1 0x0 PRP2 0x0 00:25:07.501 [2024-07-15 18:50:41.807802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.501 [2024-07-15 18:50:41.807869] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c78d0 was disconnected and freed. reset controller. 
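The burst above is the host side of a submission-queue teardown: every WRITE and READ still queued on the deleted SQ is completed with ABORTED - SQ DELETION (00/08), after which qpair 0x17c78d0 is disconnected and freed and a controller reset begins. A minimal, hypothetical Python sketch for summarizing such a burst from a saved copy of this console log (only the nvme_qpair.c print format shown above is assumed; the file name is invented):

import re
from collections import Counter

# Matches the nvme_io_qpair_print_command entries above, e.g.
#   "... *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94552 len:8 SGL DATA BLOCK ..."
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize_aborts(path):
    """Count aborted READ/WRITE commands in a saved console log and report the LBA span they cover."""
    ops = Counter()
    lbas = []
    with open(path) as log:
        for line in log:
            for opcode, _sqid, _cid, _nsid, lba, _length in CMD_RE.findall(line):
                ops[opcode] += 1
                lbas.append(int(lba))
    span = (min(lbas), max(lbas)) if lbas else (None, None)
    return ops, span

counts, (lo, hi) = summarize_aborts("nvmf_timeout_console.log")  # hypothetical file name
print(f"aborted commands: {dict(counts)}, lba range: {lo}..{hi}")

For the entries visible above, such a summary would show a mix of WRITE and READ commands spanning lba 94024 through 94776 on nsid 1.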
00:25:07.501 [2024-07-15 18:50:41.808112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.501 [2024-07-15 18:50:41.808193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor
00:25:07.501 [2024-07-15 18:50:41.808307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.501 [2024-07-15 18:50:41.808336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420
00:25:07.501 [2024-07-15 18:50:41.808347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set
00:25:07.501 [2024-07-15 18:50:41.808364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor
00:25:07.501 [2024-07-15 18:50:41.808379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:07.501 [2024-07-15 18:50:41.808390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:07.501 [2024-07-15 18:50:41.808401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:07.501 [2024-07-15 18:50:41.808420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:07.501 [2024-07-15 18:50:41.808430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.501 18:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:25:08.432 [2024-07-15 18:50:42.808573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.432 [2024-07-15 18:50:42.808641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420
00:25:08.432 [2024-07-15 18:50:42.808656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set
00:25:08.432 [2024-07-15 18:50:42.808681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor
00:25:08.432 [2024-07-15 18:50:42.808699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:08.432 [2024-07-15 18:50:42.808710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:08.432 [2024-07-15 18:50:42.808723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:08.432 [2024-07-15 18:50:42.808746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
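errno 111 on Linux is ECONNREFUSED: while the subsystem's TCP listener is removed, each reconnect attempt (roughly one per second in the entries above) is refused at connect(), so spdk_nvme_ctrlr_reconnect_poll_async keeps failing and the reset is retried. An illustrative Python sketch, not part of the test, that reproduces the same errno; it assumes the target address is reachable but nothing is listening on the port (address and port copied from the log):

import errno
import socket

def try_connect(addr: str, port: int) -> int:
    """Return 0 if a plain TCP connect succeeds, otherwise the errno reported by the kernel."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((addr, port))

rc = try_connect("10.0.0.2", 4420)   # address and port taken from the log above
if rc == errno.ECONNREFUSED:         # ECONNREFUSED == 111 on Linux
    print("connect() failed, errno = 111 (ECONNREFUSED), matching posix_sock_create above")
else:
    print(f"connect_ex() returned {rc}")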
00:25:08.432 [2024-07-15 18:50:42.808756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:08.432 18:50:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:08.689 [2024-07-15 18:50:43.137720] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.689 18:50:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 97001
00:25:09.621 [2024-07-15 18:50:43.826557] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:16.219
00:25:16.219 Latency(us)
00:25:16.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.219 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:16.219 Verification LBA range: start 0x0 length 0x4000
00:25:16.219 NVMe0n1 : 10.01 6892.19 26.92 0.00 0.00 18543.25 1786.64 3019898.88
00:25:16.219 ===================================================================================================================
00:25:16.219 Total : 6892.19 26.92 0.00 0.00 18543.25 1786.64 3019898.88
00:25:16.219 0
00:25:16.476 18:50:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97118
00:25:16.476 18:50:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:25:16.476 18:50:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:16.476 Running I/O for 10 seconds...
00:25:17.409 18:50:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:17.667 [2024-07-15 18:50:52.006560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set
00:25:17.667 [2024-07-15 18:50:52.006687]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the 
state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006884] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.006975] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175660 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.008081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.667 [2024-07-15 18:50:52.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.667 [2024-07-15 18:50:52.008134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.667 [2024-07-15 18:50:52.008144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.667 [2024-07-15 18:50:52.008155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.667 [2024-07-15 18:50:52.008164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.667 [2024-07-15 18:50:52.008175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.667 
[2024-07-15 18:50:52.008184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.667 [2024-07-15 18:50:52.008193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set 00:25:17.667 [2024-07-15 18:50:52.008268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.667 [2024-07-15 18:50:52.008279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.667 [2024-07-15 18:50:52.008296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.667 [2024-07-15 18:50:52.008305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.668 [2024-07-15 18:50:52.008612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.008979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.008989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.668 [2024-07-15 18:50:52.009167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.668 [2024-07-15 18:50:52.009176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.669 [2024-07-15 18:50:52.009299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 
18:50:52.009502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.669 [2024-07-15 18:50:52.009939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.669 [2024-07-15 18:50:52.009967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.669 [2024-07-15 18:50:52.009987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.009998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.669 [2024-07-15 18:50:52.010007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.010018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.669 [2024-07-15 18:50:52.010027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.010039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.669 [2024-07-15 18:50:52.010049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.669 [2024-07-15 18:50:52.010060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:17.670 [2024-07-15 18:50:52.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010546] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.670 [2024-07-15 18:50:52.010884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.670 [2024-07-15 18:50:52.010903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.671 [2024-07-15 18:50:52.010911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.671 [2024-07-15 18:50:52.010918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105280 len:8 PRP1 0x0 PRP2 0x0 00:25:17.671 [2024-07-15 18:50:52.010927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.671 [2024-07-15 18:50:52.010980] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c9670 was disconnected and freed. reset controller. 
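The burst of "WRITE ... ABORTED - SQ DELETION" notices above is the host side completing its in-flight writes with an abort status as the disconnected I/O queue pair (0x17c9670) is torn down and freed, after which the host tries to reset the controller; the connect() failures that follow (errno 111) show that nothing is listening on 10.0.0.2:4420 at that point. In this harness the target listener is toggled with the nvmf_subsystem_remove_listener / nvmf_subsystem_add_listener RPCs; a minimal sketch of the teardown half, using the same paths, NQN and address that appear elsewhere in this log:

    # Sketch only: remove the TCP listener so connected queue pairs are dropped and
    # subsequent reconnect attempts fail with "connection refused" (errno 111).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420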
00:25:17.671 [2024-07-15 18:50:52.011159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.671 [2024-07-15 18:50:52.011175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor 00:25:17.671 [2024-07-15 18:50:52.011257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.671 [2024-07-15 18:50:52.011277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420 00:25:17.671 [2024-07-15 18:50:52.011287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set 00:25:17.671 [2024-07-15 18:50:52.011301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor 00:25:17.671 [2024-07-15 18:50:52.011314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.671 [2024-07-15 18:50:52.011324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.671 [2024-07-15 18:50:52.011334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.671 [2024-07-15 18:50:52.011350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.671 [2024-07-15 18:50:52.011359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.671 18:50:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:25:18.597 [2024-07-15 18:50:53.022927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.597 [2024-07-15 18:50:53.023003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420 00:25:18.597 [2024-07-15 18:50:53.023018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set 00:25:18.597 [2024-07-15 18:50:53.023045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor 00:25:18.597 [2024-07-15 18:50:53.023063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.597 [2024-07-15 18:50:53.023072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.597 [2024-07-15 18:50:53.023084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.597 [2024-07-15 18:50:53.023108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
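While the listener is gone, each reconnect attempt above fails the same way, roughly once per second: posix_sock_create reports errno 111, controller reinitialization fails, and the controller is put back into the failed state before the next retry. A hedged way to watch this from the host side rather than from the log is to poll the controller state over the application's RPC socket, assuming the same /var/tmp/bdevperf.sock socket used elsewhere in this log:

    # Sketch only: list the host-side NVMe-oF controllers and their current state
    # over the bdevperf RPC socket while the retries above are failing.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers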
00:25:18.597 [2024-07-15 18:50:53.023118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.963 [2024-07-15 18:50:54.023290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 18:50:54.023356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420 00:25:19.963 [2024-07-15 18:50:54.023389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set 00:25:19.963 [2024-07-15 18:50:54.023416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor 00:25:19.963 [2024-07-15 18:50:54.023435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.963 [2024-07-15 18:50:54.023445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.963 [2024-07-15 18:50:54.023457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.963 [2024-07-15 18:50:54.023482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.963 [2024-07-15 18:50:54.023493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.917 [2024-07-15 18:50:55.023887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.917 [2024-07-15 18:50:55.023971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a240 with addr=10.0.0.2, port=4420 00:25:20.917 [2024-07-15 18:50:55.023987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a240 is same with the state(5) to be set 00:25:20.917 [2024-07-15 18:50:55.024205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a240 (9): Bad file descriptor 00:25:20.917 [2024-07-15 18:50:55.024431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.917 [2024-07-15 18:50:55.024452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.917 [2024-07-15 18:50:55.024465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.917 [2024-07-15 18:50:55.027678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.917 [2024-07-15 18:50:55.027709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.917 18:50:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.917 [2024-07-15 18:50:55.295157] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.917 18:50:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97118 00:25:21.864 [2024-07-15 18:50:56.060125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
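The recovery step is visible at the end of the block above: host/timeout.sh@102 re-creates the TCP listener on the target, the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", and the very next reset attempt (18:50:56) completes successfully. A minimal sketch of that step, with the NQN, address and port taken from the log; nothing is needed on the host side, since the bdev_nvme layer keeps retrying on its own:

    # Sketch only: put the TCP listener back so the host's pending reconnect/reset
    # loop can finally succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420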
00:25:27.147 00:25:27.147 Latency(us) 00:25:27.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.147 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:27.147 Verification LBA range: start 0x0 length 0x4000 00:25:27.147 NVMe0n1 : 10.00 6095.65 23.81 4426.53 0.00 12144.20 565.64 3019898.88 00:25:27.147 =================================================================================================================== 00:25:27.147 Total : 6095.65 23.81 4426.53 0.00 12144.20 0.00 3019898.88 00:25:27.147 0 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96952 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96952 ']' 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96952 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96952 00:25:27.147 killing process with pid 96952 00:25:27.147 Received shutdown signal, test time was about 10.000000 seconds 00:25:27.147 00:25:27.147 Latency(us) 00:25:27.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.147 =================================================================================================================== 00:25:27.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96952' 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96952 00:25:27.147 18:51:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96952 00:25:27.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97243 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97243 /var/tmp/bdevperf.sock 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 97243 ']' 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.147 18:51:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.147 [2024-07-15 18:51:01.163576] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:25:27.147 [2024-07-15 18:51:01.163697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97243 ] 00:25:27.147 [2024-07-15 18:51:01.308941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.147 [2024-07-15 18:51:01.409731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.714 18:51:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.714 18:51:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:27.714 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97243 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:27.714 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97267 00:25:27.714 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:27.972 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:28.230 NVMe0n1 00:25:28.230 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:28.230 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97320 00:25:28.230 18:51:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:25:28.230 Running I/O for 10 seconds... 
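host/timeout.sh@109 through @125 above bring up a fresh bdevperf instance for the next sub-test: start bdevperf idle (-z) on /var/tmp/bdevperf.sock, attach the bpftrace probe, apply the bdev_nvme options used by the test (bdev_nvme_set_options -r -1 -e 9), attach the controller with a 5 s controller-loss timeout and 2 s reconnect delay, and kick off the 10-second randread run with perform_tests. A condensed, hedged sketch of that sequence (flags and paths copied from the log; the explicit backgrounding and socket wait loop are stand-ins for the harness's waitforlisten helper, which is not shown here):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf idle (-z) and wait for its RPC socket to appear.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
    # (host/timeout.sh@115 also attaches scripts/bpftrace.sh with scripts/bpf/nvmf_timeout.bt here.)
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Run the configured 10-second randread workload.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests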
00:25:29.162 18:51:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.444 [2024-07-15 18:51:03.885627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.444 [2024-07-15 18:51:03.885751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885796] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885852] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885934] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.885992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the 
state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 
18:51:03.886456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.445 [2024-07-15 18:51:03.886584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same 
with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.886824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178870 is same with the state(5) to be set 00:25:29.446 [2024-07-15 18:51:03.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3152 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.446 [2024-07-15 18:51:03.887787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.446 [2024-07-15 18:51:03.887796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:29.447 [2024-07-15 18:51:03.887948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.887989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.887998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888148] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.447 [2024-07-15 18:51:03.888652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.447 [2024-07-15 18:51:03.888661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:29.448 [2024-07-15 18:51:03.888917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.888988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.888999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 
18:51:03.889144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.448 [2024-07-15 18:51:03.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.448 [2024-07-15 18:51:03.889563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37064 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.449 [2024-07-15 18:51:03.889904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.889932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:29.449 [2024-07-15 18:51:03.889941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:29.449 [2024-07-15 18:51:03.889950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3752 len:8 PRP1 0x0 PRP2 0x0 00:25:29.449 [2024-07-15 18:51:03.889969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.449 [2024-07-15 18:51:03.890020] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef28d0 was disconnected and freed. reset controller. 
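The wall of completions above is expected noise from the reset path: bdev_nvme disconnects the qpair, the submission queue is deleted, and every queued READ is completed manually with the generic status ABORTED - SQ DELETION (sct 0x00, sc 0x08) before qpair 0xef28d0 is freed and the controller reset begins. A minimal sketch for triaging a captured copy of this console output, counting those aborts so they are not mistaken for real I/O errors (the log filename is hypothetical; the grep patterns are taken verbatim from the lines above):

  # Hypothetical capture of this console output.
  build_log=nvmf-tcp-vg-autotest.log
  # Completions aborted because the SQ was deleted during the controller reset.
  aborts=$(grep -c 'ABORTED - SQ DELETION (00/08)' "$build_log")
  # Queued READs printed alongside those aborted completions.
  reads=$(grep -c 'READ sqid:1 ' "$build_log")
  echo "aborted completions: ${aborts}, queued reads: ${reads}"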
00:25:29.449 [2024-07-15 18:51:03.890261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.449 [2024-07-15 18:51:03.890329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85240 (9): Bad file descriptor 00:25:29.449 [2024-07-15 18:51:03.890424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.449 [2024-07-15 18:51:03.890442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe85240 with addr=10.0.0.2, port=4420 00:25:29.449 [2024-07-15 18:51:03.890453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85240 is same with the state(5) to be set 00:25:29.449 [2024-07-15 18:51:03.890469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85240 (9): Bad file descriptor 00:25:29.449 [2024-07-15 18:51:03.890484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.449 [2024-07-15 18:51:03.890494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.449 [2024-07-15 18:51:03.890506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.449 [2024-07-15 18:51:03.890526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.449 [2024-07-15 18:51:03.890535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.449 18:51:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97320 00:25:31.974 [2024-07-15 18:51:05.890820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.975 [2024-07-15 18:51:05.890878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe85240 with addr=10.0.0.2, port=4420 00:25:31.975 [2024-07-15 18:51:05.890892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85240 is same with the state(5) to be set 00:25:31.975 [2024-07-15 18:51:05.890920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85240 (9): Bad file descriptor 00:25:31.975 [2024-07-15 18:51:05.890938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.975 [2024-07-15 18:51:05.890957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.975 [2024-07-15 18:51:05.890969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.975 [2024-07-15 18:51:05.890992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
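The reconnect attempts around this point repeat on a roughly 2-second cadence (18:51:03, :05, :07, :09): each cycle logs "resetting controller", the TCP connect() to 10.0.0.2:4420 fails with errno 111 because the listener has been torn down, and controller reinitialization fails until the controller is finally left in the failed state. That cadence is what the bpftrace probes dumped further down record as "reconnect delay bdev controller NVMe0" events. A sketch of attaching a bdev_nvme controller with this kind of retry policy, assuming the reconnect/ctrlr-loss options provided by current rpc.py builds (the timeout values shown are illustrative, not necessarily the ones this test used):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Retry the TCP connection every 2 s, giving up on the controller after ~8 s of loss.
  "$rpc_py" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8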
00:25:31.975 [2024-07-15 18:51:05.891003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.872 [2024-07-15 18:51:07.891224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.872 [2024-07-15 18:51:07.891280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe85240 with addr=10.0.0.2, port=4420 00:25:33.872 [2024-07-15 18:51:07.891295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85240 is same with the state(5) to be set 00:25:33.872 [2024-07-15 18:51:07.891321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85240 (9): Bad file descriptor 00:25:33.872 [2024-07-15 18:51:07.891337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.872 [2024-07-15 18:51:07.891347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.872 [2024-07-15 18:51:07.891358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.872 [2024-07-15 18:51:07.891379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.872 [2024-07-15 18:51:07.891388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.770 [2024-07-15 18:51:09.891547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.770 [2024-07-15 18:51:09.891604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.770 [2024-07-15 18:51:09.891616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.770 [2024-07-15 18:51:09.891627] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:35.770 [2024-07-15 18:51:09.891652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.730 00:25:36.730 Latency(us) 00:25:36.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.730 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:36.730 NVMe0n1 : 8.20 3062.28 11.96 15.61 0.00 41624.59 1981.68 7030452.42 00:25:36.730 =================================================================================================================== 00:25:36.730 Total : 3062.28 11.96 15.61 0.00 41624.59 1981.68 7030452.42 00:25:36.730 0 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:36.730 Attaching 5 probes... 
00:25:36.730 1212.062069: reset bdev controller NVMe0 00:25:36.730 1212.172967: reconnect bdev controller NVMe0 00:25:36.730 3212.491090: reconnect delay bdev controller NVMe0 00:25:36.730 3212.513520: reconnect bdev controller NVMe0 00:25:36.730 5212.895704: reconnect delay bdev controller NVMe0 00:25:36.730 5212.918568: reconnect bdev controller NVMe0 00:25:36.730 7213.326872: reconnect delay bdev controller NVMe0 00:25:36.730 7213.349436: reconnect bdev controller NVMe0 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97267 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97243 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 97243 ']' 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 97243 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97243 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:36.730 killing process with pid 97243 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97243' 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 97243 00:25:36.730 Received shutdown signal, test time was about 8.266244 seconds 00:25:36.730 00:25:36.730 Latency(us) 00:25:36.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.730 =================================================================================================================== 00:25:36.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.730 18:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 97243 00:25:36.730 18:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.987 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.987 rmmod nvme_tcp 00:25:36.987 rmmod nvme_fabrics 00:25:36.987 rmmod nvme_keyring 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96661 ']' 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96661 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96661 ']' 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96661 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.988 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96661 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:37.246 killing process with pid 96661 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96661' 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96661 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96661 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.246 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.504 18:51:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:37.504 00:25:37.504 real 0m46.986s 00:25:37.504 user 2m17.786s 00:25:37.504 sys 0m5.845s 00:25:37.504 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.504 18:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.504 ************************************ 00:25:37.504 END TEST nvmf_timeout 00:25:37.504 ************************************ 00:25:37.504 18:51:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:37.504 18:51:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:25:37.504 18:51:11 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:37.504 18:51:11 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.504 18:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.504 18:51:11 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:37.504 00:25:37.504 real 15m51.905s 00:25:37.504 user 41m23.203s 00:25:37.504 sys 4m2.351s 00:25:37.504 18:51:11 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.504 18:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.504 ************************************ 00:25:37.504 END TEST nvmf_tcp 00:25:37.504 ************************************ 00:25:37.504 18:51:11 -- common/autotest_common.sh@1142 -- 
# return 0 00:25:37.504 18:51:11 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:37.504 18:51:11 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:37.504 18:51:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.504 18:51:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.504 18:51:11 -- common/autotest_common.sh@10 -- # set +x 00:25:37.504 ************************************ 00:25:37.504 START TEST spdkcli_nvmf_tcp 00:25:37.504 ************************************ 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:37.504 * Looking for test storage... 00:25:37.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.504 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.763 18:51:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=97534 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 97534 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 97534 ']' 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.763 18:51:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.763 [2024-07-15 18:51:12.080543] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:37.763 [2024-07-15 18:51:12.080670] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97534 ] 00:25:37.763 [2024-07-15 18:51:12.224850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:38.021 [2024-07-15 18:51:12.327739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.021 [2024-07-15 18:51:12.327744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.953 18:51:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:38.953 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:38.953 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:38.953 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:38.953 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:38.953 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:38.953 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:38.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 
127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:38.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:38.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:38.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:38.953 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:38.953 ' 00:25:41.482 [2024-07-15 18:51:15.843982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.891 [2024-07-15 18:51:17.149164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:45.418 [2024-07-15 18:51:19.538753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:47.310 [2024-07-15 18:51:21.600322] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:48.682 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:48.682 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:48.682 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:48.682 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:48.682 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:48.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:48.682 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.942 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:48.942 18:51:23 
spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.508 18:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:49.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:49.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:49.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:49.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:49.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:49.508 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:49.508 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:49.508 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:49.508 ' 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:54.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:54.774 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:54.774 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
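The spdkcli_job.py calls above drive scripts/spdkcli.py non-interactively: the single quoted argument holds one line per step, each line a 'command' 'expected match' flag triple, and check_match then captures "spdkcli.py ll /nvmf" and compares it with the stored spdkcli_nvmf.test.match file via the match tool. A minimal sketch of replaying a few of these steps by hand against an already-running SPDK target (the repo path and the single-argument job format are taken from the invocations in this log; treat them as assumptions elsewhere):

#!/usr/bin/env bash
# Sketch only: replay a handful of the spdkcli steps above against a running SPDK target.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumption: same layout as the test VM

# Each line inside the single argument is: 'spdkcli command' 'expected output match' flag
"$SPDK_DIR/test/spdkcli/spdkcli_job.py" "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
'/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1' 'Malloc1' True
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4' '127.0.0.1:4260' True"

# Same view that check_match compares against the stored .match file.
"$SPDK_DIR/scripts/spdkcli.py" ll /nvmf
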
00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:54.774 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 97534 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97534 ']' 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97534 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97534 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:55.031 killing process with pid 97534 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97534' 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 97534 00:25:55.031 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 97534 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 97534 ']' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 97534 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97534 ']' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97534 00:25:55.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (97534) - No such process 00:25:55.289 Process with pid 97534 is not found 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 97534 is not found' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:55.289 ************************************ 00:25:55.289 END TEST spdkcli_nvmf_tcp 00:25:55.289 ************************************ 00:25:55.289 00:25:55.289 real 0m17.715s 00:25:55.289 user 0m38.538s 00:25:55.289 sys 0m1.040s 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:55.289 18:51:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.289 18:51:29 -- common/autotest_common.sh@1142 -- # return 0 00:25:55.289 18:51:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:55.289 18:51:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:55.289 18:51:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.289 18:51:29 -- common/autotest_common.sh@10 -- # set +x 00:25:55.289 ************************************ 00:25:55.289 START TEST nvmf_identify_passthru 00:25:55.289 ************************************ 00:25:55.289 18:51:29 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:55.289 * Looking for test storage... 00:25:55.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:55.289 18:51:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.289 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.290 18:51:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.290 18:51:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.290 18:51:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.290 18:51:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.290 18:51:29 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.290 18:51:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.290 18:51:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:55.290 18:51:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:55.290 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:55.548 18:51:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.548 18:51:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.548 18:51:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.548 18:51:29 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.548 18:51:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.548 18:51:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:55.548 18:51:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.548 18:51:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.548 18:51:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:55.548 18:51:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:55.548 Cannot find device "nvmf_tgt_br" 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:55.548 Cannot find device "nvmf_tgt_br2" 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:55.548 Cannot find device "nvmf_tgt_br" 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:55.548 Cannot find device "nvmf_tgt_br2" 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:55.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:55.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:25:55.548 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:55.549 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:55.549 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:55.549 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:55.549 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:55.549 18:51:29 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:55.549 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:25:55.549 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:55.549 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:55.806 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:55.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:25:55.807 00:25:55.807 --- 10.0.0.2 ping statistics --- 00:25:55.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.807 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:55.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:55.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:25:55.807 00:25:55.807 --- 10.0.0.3 ping statistics --- 00:25:55.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.807 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:55.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:55.807 00:25:55.807 --- 10.0.0.1 ping statistics --- 00:25:55.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.807 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.807 18:51:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:55.807 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:55.807 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:56.064 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
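The serial-number lookup just above (and the model-number check that follows) boil down to: enumerate local NVMe PCIe addresses with gen_nvme.sh, take the first bdf, and scrape the spdk_nvme_identify output. A condensed sketch of that step using the same commands and paths seen here (the repo location is an assumption carried over from the test VM):

#!/usr/bin/env bash
# Condensed form of get_first_nvme_bdf plus the Serial/Model Number scrape above.
set -euo pipefail
rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}

# First PCIe NVMe address reported by gen_nvme.sh, e.g. 0000:00:10.0
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
[ -n "$bdf" ] || { echo "no NVMe controller found" >&2; exit 1; }

identify="$rootdir/build/bin/spdk_nvme_identify"
serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$serial model=$model"   # in this run: serial 12340, model QEMU
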
00:25:56.064 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:56.064 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:56.064 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=98041 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.322 18:51:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 98041 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 98041 ']' 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.322 18:51:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.322 [2024-07-15 18:51:30.768332] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:56.322 [2024-07-15 18:51:30.769219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.580 [2024-07-15 18:51:30.930767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.580 [2024-07-15 18:51:31.048461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.580 [2024-07-15 18:51:31.048527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.580 [2024-07-15 18:51:31.048552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.580 [2024-07-15 18:51:31.048574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:56.580 [2024-07-15 18:51:31.048589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.580 [2024-07-15 18:51:31.048766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.580 [2024-07-15 18:51:31.048926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.580 [2024-07-15 18:51:31.049665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.580 [2024-07-15 18:51:31.049676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 [2024-07-15 18:51:31.847482] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 [2024-07-15 18:51:31.856873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 Nvme0n1 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.514 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.773 18:51:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.773 18:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.773 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.773 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.773 [2024-07-15 18:51:32.003813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.773 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.773 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:57.773 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.773 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.773 [ 00:25:57.773 { 00:25:57.773 "allow_any_host": true, 00:25:57.773 "hosts": [], 00:25:57.773 "listen_addresses": [], 00:25:57.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:57.773 "subtype": "Discovery" 00:25:57.773 }, 00:25:57.773 { 00:25:57.773 "allow_any_host": true, 00:25:57.773 "hosts": [], 00:25:57.773 "listen_addresses": [ 00:25:57.774 { 00:25:57.774 "adrfam": "IPv4", 00:25:57.774 "traddr": "10.0.0.2", 00:25:57.774 "trsvcid": "4420", 00:25:57.774 "trtype": "TCP" 00:25:57.774 } 00:25:57.774 ], 00:25:57.774 "max_cntlid": 65519, 00:25:57.774 "max_namespaces": 1, 00:25:57.774 "min_cntlid": 1, 00:25:57.774 "model_number": "SPDK bdev Controller", 00:25:57.774 "namespaces": [ 00:25:57.774 { 00:25:57.774 "bdev_name": "Nvme0n1", 00:25:57.774 "name": "Nvme0n1", 00:25:57.774 "nguid": "819ADAC4DE194725A1D3115E45DAA5FF", 00:25:57.774 "nsid": 1, 00:25:57.774 "uuid": "819adac4-de19-4725-a1d3-115e45daa5ff" 00:25:57.774 } 00:25:57.774 ], 00:25:57.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.774 "serial_number": "SPDK00000000000001", 00:25:57.774 "subtype": "NVMe" 00:25:57.774 } 00:25:57.774 ] 00:25:57.774 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:57.774 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:58.032 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:58.032 18:51:32 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:58.032 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:58.032 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.032 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.032 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.032 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.032 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:58.032 18:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.032 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.032 rmmod nvme_tcp 00:25:58.291 rmmod nvme_fabrics 00:25:58.291 rmmod nvme_keyring 00:25:58.291 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.291 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:58.291 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:58.291 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 98041 ']' 00:25:58.291 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 98041 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 98041 ']' 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 98041 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98041 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:58.291 killing process with pid 98041 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98041' 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 98041 00:25:58.291 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 98041 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.551 18:51:32 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:58.551 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.551 18:51:32 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:58.551 00:25:58.551 real 0m3.200s 00:25:58.551 user 0m7.482s 00:25:58.551 sys 0m0.908s 00:25:58.551 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:58.551 18:51:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.551 ************************************ 00:25:58.551 END TEST nvmf_identify_passthru 00:25:58.551 ************************************ 00:25:58.551 18:51:32 -- common/autotest_common.sh@1142 -- # return 0 00:25:58.551 18:51:32 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:58.551 18:51:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:58.551 18:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.551 18:51:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.551 ************************************ 00:25:58.551 START TEST nvmf_dif 00:25:58.551 ************************************ 00:25:58.551 18:51:32 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:58.551 * Looking for test storage... 00:25:58.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:58.551 18:51:32 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.551 18:51:32 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:58.551 18:51:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.551 18:51:33 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.551 18:51:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.551 18:51:33 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.551 18:51:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.551 18:51:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.551 18:51:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:58.551 18:51:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:58.551 18:51:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:58.551 18:51:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:58.551 18:51:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:58.551 18:51:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:58.551 18:51:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.551 18:51:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:58.551 18:51:33 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:58.551 18:51:33 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:58.809 Cannot find device "nvmf_tgt_br" 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@155 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:58.809 Cannot find device "nvmf_tgt_br2" 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@156 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:58.809 Cannot find device "nvmf_tgt_br" 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@158 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:58.809 Cannot find device "nvmf_tgt_br2" 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@159 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:58.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@162 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:58.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@163 -- # true 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:58.809 18:51:33 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:59.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:25:59.066 00:25:59.066 --- 10.0.0.2 ping statistics --- 00:25:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.066 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:59.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:59.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:25:59.066 00:25:59.066 --- 10.0.0.3 ping statistics --- 00:25:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.066 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:59.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:25:59.066 00:25:59.066 --- 10.0.0.1 ping statistics --- 00:25:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.066 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:59.066 18:51:33 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:59.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.323 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.581 18:51:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:59.581 18:51:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98393 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98393 00:25:59.581 18:51:33 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 98393 ']' 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.581 18:51:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.581 [2024-07-15 18:51:33.919324] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:59.581 [2024-07-15 18:51:33.919450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.839 [2024-07-15 18:51:34.064251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.839 [2024-07-15 18:51:34.165516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:59.839 [2024-07-15 18:51:34.165596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.839 [2024-07-15 18:51:34.165613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.839 [2024-07-15 18:51:34.165629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.839 [2024-07-15 18:51:34.165643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.839 [2024-07-15 18:51:34.165687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.451 18:51:34 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.451 18:51:34 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:00.451 18:51:34 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.451 18:51:34 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.451 18:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.709 18:51:34 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.710 18:51:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:00.710 18:51:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 [2024-07-15 18:51:34.983767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.710 18:51:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.710 18:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 ************************************ 00:26:00.710 START TEST fio_dif_1_default 00:26:00.710 ************************************ 00:26:00.710 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 bdev_null0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.710 18:51:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.710 [2024-07-15 18:51:35.031859] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.710 { 00:26:00.710 "params": { 00:26:00.710 "name": "Nvme$subsystem", 00:26:00.710 "trtype": "$TEST_TRANSPORT", 00:26:00.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.710 "adrfam": "ipv4", 00:26:00.710 "trsvcid": "$NVMF_PORT", 00:26:00.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.710 "hdgst": ${hdgst:-false}, 00:26:00.710 "ddgst": ${ddgst:-false} 00:26:00.710 }, 00:26:00.710 "method": "bdev_nvme_attach_controller" 00:26:00.710 } 00:26:00.710 EOF 00:26:00.710 )") 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.710 18:51:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.710 "params": { 00:26:00.710 "name": "Nvme0", 00:26:00.710 "trtype": "tcp", 00:26:00.710 "traddr": "10.0.0.2", 00:26:00.710 "adrfam": "ipv4", 00:26:00.710 "trsvcid": "4420", 00:26:00.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.710 "hdgst": false, 00:26:00.710 "ddgst": false 00:26:00.710 }, 00:26:00.710 "method": "bdev_nvme_attach_controller" 00:26:00.710 }' 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.710 18:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.968 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.968 fio-3.35 00:26:00.968 Starting 1 thread 00:26:13.160 00:26:13.160 filename0: (groupid=0, jobs=1): err= 0: pid=98473: Mon Jul 15 18:51:45 2024 00:26:13.160 read: IOPS=1138, BW=4553KiB/s (4662kB/s)(44.5MiB/10019msec) 00:26:13.160 slat (nsec): min=5885, max=58236, avg=6953.68, stdev=2147.53 00:26:13.160 clat (usec): min=332, max=42436, avg=3494.47, stdev=10721.57 00:26:13.160 lat (usec): min=338, max=42446, avg=3501.42, stdev=10721.59 00:26:13.160 clat percentiles (usec): 00:26:13.160 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 379], 00:26:13.160 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 
424], 00:26:13.160 | 70.00th=[ 441], 80.00th=[ 474], 90.00th=[ 537], 95.00th=[40633], 00:26:13.160 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:26:13.160 | 99.99th=[42206] 00:26:13.160 bw ( KiB/s): min= 2144, max=14272, per=100.00%, avg=4560.00, stdev=2976.83, samples=20 00:26:13.160 iops : min= 536, max= 3568, avg=1140.00, stdev=744.21, samples=20 00:26:13.160 lat (usec) : 500=82.34%, 750=9.98%, 1000=0.04% 00:26:13.160 lat (msec) : 4=0.04%, 50=7.61% 00:26:13.160 cpu : usr=84.86%, sys=14.50%, ctx=23, majf=0, minf=9 00:26:13.160 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.160 issued rwts: total=11404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.160 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.160 00:26:13.160 Run status group 0 (all jobs): 00:26:13.160 READ: bw=4553KiB/s (4662kB/s), 4553KiB/s-4553KiB/s (4662kB/s-4662kB/s), io=44.5MiB (46.7MB), run=10019-10019msec 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:13.160 18:51:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.161 18:51:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 ************************************ 00:26:13.161 END TEST fio_dif_1_default 00:26:13.161 ************************************ 00:26:13.161 00:26:13.161 real 0m11.017s 00:26:13.161 user 0m9.137s 00:26:13.161 sys 0m1.737s 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:13.161 18:51:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:13.161 18:51:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:13.161 18:51:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 ************************************ 00:26:13.161 START TEST fio_dif_1_multi_subsystems 00:26:13.161 ************************************ 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 bdev_null0 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 [2024-07-15 18:51:46.108432] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 bdev_null1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.161 { 00:26:13.161 "params": { 00:26:13.161 "name": "Nvme$subsystem", 00:26:13.161 "trtype": "$TEST_TRANSPORT", 00:26:13.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.161 "adrfam": "ipv4", 00:26:13.161 "trsvcid": "$NVMF_PORT", 00:26:13.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.161 "hdgst": ${hdgst:-false}, 00:26:13.161 "ddgst": ${ddgst:-false} 00:26:13.161 }, 00:26:13.161 "method": "bdev_nvme_attach_controller" 00:26:13.161 } 00:26:13.161 EOF 00:26:13.161 )") 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:13.161 18:51:46 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.161 { 00:26:13.161 "params": { 00:26:13.161 "name": "Nvme$subsystem", 00:26:13.161 "trtype": "$TEST_TRANSPORT", 00:26:13.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.161 "adrfam": "ipv4", 00:26:13.161 "trsvcid": "$NVMF_PORT", 00:26:13.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.161 "hdgst": ${hdgst:-false}, 00:26:13.161 "ddgst": ${ddgst:-false} 00:26:13.161 }, 00:26:13.161 "method": "bdev_nvme_attach_controller" 00:26:13.161 } 00:26:13.161 EOF 00:26:13.161 )") 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
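For reference, the create_subsystems calls above reduce to four RPCs per subsystem: a DIF-type-1 null bdev, an NVMe-oF subsystem, a namespace mapping, and a TCP listener. A condensed sketch for subsystem 1 using scripts/rpc.py directly; the standalone rpc.py invocation and its path are assumptions, while the arguments are exactly those the harness passes through its rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path; rpc_cmd wraps this against /var/tmp/spdk.sock
$rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1    # 64 MiB bdev, 512B blocks, 16B metadata
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420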
00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:13.161 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:13.161 "params": { 00:26:13.161 "name": "Nvme0", 00:26:13.161 "trtype": "tcp", 00:26:13.161 "traddr": "10.0.0.2", 00:26:13.161 "adrfam": "ipv4", 00:26:13.161 "trsvcid": "4420", 00:26:13.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.162 "hdgst": false, 00:26:13.162 "ddgst": false 00:26:13.162 }, 00:26:13.162 "method": "bdev_nvme_attach_controller" 00:26:13.162 },{ 00:26:13.162 "params": { 00:26:13.162 "name": "Nvme1", 00:26:13.162 "trtype": "tcp", 00:26:13.162 "traddr": "10.0.0.2", 00:26:13.162 "adrfam": "ipv4", 00:26:13.162 "trsvcid": "4420", 00:26:13.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.162 "hdgst": false, 00:26:13.162 "ddgst": false 00:26:13.162 }, 00:26:13.162 "method": "bdev_nvme_attach_controller" 00:26:13.162 }' 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:13.162 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.162 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.162 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.162 fio-3.35 00:26:13.162 Starting 2 threads 00:26:23.131 00:26:23.131 filename0: (groupid=0, jobs=1): err= 0: pid=98635: Mon Jul 15 18:51:57 2024 00:26:23.131 read: IOPS=180, BW=724KiB/s (741kB/s)(7264KiB/10036msec) 00:26:23.131 slat (nsec): min=5930, max=74045, avg=8619.21, stdev=4851.98 00:26:23.131 clat (usec): min=335, max=42464, avg=22079.04, stdev=20186.30 00:26:23.131 lat (usec): min=341, max=42472, avg=22087.66, stdev=20185.89 00:26:23.131 clat percentiles (usec): 00:26:23.131 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 375], 00:26:23.131 | 30.00th=[ 400], 40.00th=[ 457], 50.00th=[40633], 60.00th=[40633], 00:26:23.131 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.131 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:23.131 | 99.99th=[42206] 00:26:23.131 bw ( KiB/s): min= 448, max= 960, per=50.85%, avg=724.75, stdev=120.34, samples=20 00:26:23.131 iops : 
min= 112, max= 240, avg=181.15, stdev=30.10, samples=20 00:26:23.131 lat (usec) : 500=41.02%, 750=4.30%, 1000=0.94% 00:26:23.131 lat (msec) : 2=0.22%, 50=53.52% 00:26:23.131 cpu : usr=92.26%, sys=7.36%, ctx=10, majf=0, minf=9 00:26:23.131 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.131 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.131 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.131 filename1: (groupid=0, jobs=1): err= 0: pid=98636: Mon Jul 15 18:51:57 2024 00:26:23.131 read: IOPS=174, BW=700KiB/s (717kB/s)(7024KiB/10035msec) 00:26:23.131 slat (nsec): min=5778, max=72909, avg=9654.55, stdev=5687.96 00:26:23.131 clat (usec): min=332, max=41509, avg=22828.06, stdev=20116.35 00:26:23.131 lat (usec): min=338, max=41518, avg=22837.71, stdev=20115.95 00:26:23.131 clat percentiles (usec): 00:26:23.131 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 388], 00:26:23.131 | 30.00th=[ 420], 40.00th=[ 619], 50.00th=[40633], 60.00th=[40633], 00:26:23.131 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.131 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:23.131 | 99.99th=[41681] 00:26:23.131 bw ( KiB/s): min= 512, max= 1024, per=49.17%, avg=700.70, stdev=131.23, samples=20 00:26:23.131 iops : min= 128, max= 256, avg=175.15, stdev=32.79, samples=20 00:26:23.131 lat (usec) : 500=37.93%, 750=4.95%, 1000=1.77% 00:26:23.131 lat (msec) : 50=55.35% 00:26:23.131 cpu : usr=92.53%, sys=7.04%, ctx=21, majf=0, minf=0 00:26:23.131 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.131 issued rwts: total=1756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.131 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.131 00:26:23.131 Run status group 0 (all jobs): 00:26:23.131 READ: bw=1424KiB/s (1458kB/s), 700KiB/s-724KiB/s (717kB/s-741kB/s), io=14.0MiB (14.6MB), run=10035-10036msec 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 ************************************ 00:26:23.131 END TEST fio_dif_1_multi_subsystems 00:26:23.131 ************************************ 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 00:26:23.131 real 0m11.243s 00:26:23.131 user 0m19.352s 00:26:23.131 sys 0m1.761s 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:23.131 18:51:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:23.131 18:51:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.131 18:51:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 ************************************ 00:26:23.131 START TEST fio_dif_rand_params 00:26:23.131 ************************************ 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
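This first fio_dif_rand_params pass runs with NULL_DIF=3, 128k blocks, three jobs, and queue depth 3 for 5 seconds. The DIF-specific pieces are the TCP transport, created once at target start-up with DIF insert/strip enabled, and the null bdev each pass re-creates with the requested protection type. Sketched with rpc.py below; the direct rpc.py usage is an assumption, the arguments are the ones the harness issues:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                      # assumed path
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip           # done once when the target comes up
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3    # 16B metadata carrying DIF type 3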
00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 bdev_null0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.131 [2024-07-15 18:51:57.402879] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.131 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.132 { 00:26:23.132 "params": { 00:26:23.132 "name": "Nvme$subsystem", 00:26:23.132 "trtype": "$TEST_TRANSPORT", 00:26:23.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.132 "adrfam": "ipv4", 00:26:23.132 "trsvcid": "$NVMF_PORT", 00:26:23.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.132 "hdgst": ${hdgst:-false}, 00:26:23.132 "ddgst": ${ddgst:-false} 00:26:23.132 }, 00:26:23.132 "method": "bdev_nvme_attach_controller" 00:26:23.132 } 00:26:23.132 EOF 00:26:23.132 )") 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
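The jq step above folds the per-subsystem fragments into the single JSON blob printed next; fio then reads that blob on /dev/fd/62 and the generated job file on /dev/fd/61 with the SPDK fio plugin preloaded. The same pattern can be run standalone with ordinary files, shown below; the job-file keys are a hypothetical approximation of what gen_fio_conf emits for this 128k/3-job/iodepth-3 random-read case (the real template lives in target/dif.sh), and the Nvme0n1 filename assumes the usual controller-name-plus-namespace bdev naming:

cat <<'FIO' > /tmp/dif_rand.fio          # hypothetical approximation of the generated job file
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
FIO

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_rand.json /tmp/dif_rand.fio
# /tmp/dif_rand.json stands in for the bdev_nvme_attach_controller JSON the harness prints below.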
00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.132 "params": { 00:26:23.132 "name": "Nvme0", 00:26:23.132 "trtype": "tcp", 00:26:23.132 "traddr": "10.0.0.2", 00:26:23.132 "adrfam": "ipv4", 00:26:23.132 "trsvcid": "4420", 00:26:23.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.132 "hdgst": false, 00:26:23.132 "ddgst": false 00:26:23.132 }, 00:26:23.132 "method": "bdev_nvme_attach_controller" 00:26:23.132 }' 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:23.132 18:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.391 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:23.391 ... 
00:26:23.391 fio-3.35 00:26:23.391 Starting 3 threads 00:26:29.998 00:26:29.998 filename0: (groupid=0, jobs=1): err= 0: pid=98792: Mon Jul 15 18:52:03 2024 00:26:29.998 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5002msec) 00:26:29.998 slat (nsec): min=5988, max=28571, avg=9295.78, stdev=3577.16 00:26:29.999 clat (usec): min=3746, max=15406, avg=12607.24, stdev=2059.09 00:26:29.999 lat (usec): min=3753, max=15429, avg=12616.54, stdev=2059.30 00:26:29.999 clat percentiles (usec): 00:26:29.999 | 1.00th=[ 3884], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[12256], 00:26:29.999 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:26:29.999 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:26:29.999 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15401], 99.95th=[15401], 00:26:29.999 | 99.99th=[15401] 00:26:29.999 bw ( KiB/s): min=27648, max=39246, per=28.82%, avg=30387.33, stdev=3483.16, samples=9 00:26:29.999 iops : min= 216, max= 306, avg=237.33, stdev=27.02, samples=9 00:26:29.999 lat (msec) : 4=1.77%, 10=8.67%, 20=89.56% 00:26:29.999 cpu : usr=90.98%, sys=7.98%, ctx=10, majf=0, minf=0 00:26:29.999 IO depths : 1=32.1%, 2=67.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.999 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.999 filename0: (groupid=0, jobs=1): err= 0: pid=98793: Mon Jul 15 18:52:03 2024 00:26:29.999 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(189MiB/5002msec) 00:26:29.999 slat (nsec): min=6101, max=38761, avg=12067.84, stdev=4184.08 00:26:29.999 clat (usec): min=5518, max=52140, avg=9913.67, stdev=4147.67 00:26:29.999 lat (usec): min=5526, max=52149, avg=9925.74, stdev=4147.74 00:26:29.999 clat percentiles (usec): 00:26:29.999 | 1.00th=[ 6849], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8848], 00:26:29.999 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:26:29.999 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:26:29.999 | 99.00th=[13304], 99.50th=[50594], 99.90th=[51643], 99.95th=[52167], 00:26:29.999 | 99.99th=[52167] 00:26:29.999 bw ( KiB/s): min=30720, max=41472, per=36.36%, avg=38343.11, stdev=3240.14, samples=9 00:26:29.999 iops : min= 240, max= 324, avg=299.56, stdev=25.31, samples=9 00:26:29.999 lat (msec) : 10=71.08%, 20=27.93%, 50=0.33%, 100=0.66% 00:26:29.999 cpu : usr=90.32%, sys=8.36%, ctx=10, majf=0, minf=0 00:26:29.999 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 issued rwts: total=1511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.999 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.999 filename0: (groupid=0, jobs=1): err= 0: pid=98794: Mon Jul 15 18:52:03 2024 00:26:29.999 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(178MiB/5002msec) 00:26:29.999 slat (nsec): min=6006, max=33277, avg=10410.86, stdev=3557.40 00:26:29.999 clat (usec): min=4820, max=55361, avg=10536.01, stdev=3982.49 00:26:29.999 lat (usec): min=4830, max=55389, avg=10546.42, stdev=3982.76 00:26:29.999 clat percentiles (usec): 00:26:29.999 | 1.00th=[ 6128], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9503], 00:26:29.999 | 
30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[10552], 00:26:29.999 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:26:29.999 | 99.00th=[13173], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:26:29.999 | 99.99th=[55313] 00:26:29.999 bw ( KiB/s): min=28416, max=39424, per=34.74%, avg=36636.44, stdev=3411.73, samples=9 00:26:29.999 iops : min= 222, max= 308, avg=286.22, stdev=26.65, samples=9 00:26:29.999 lat (msec) : 10=38.33%, 20=60.83%, 50=0.21%, 100=0.63% 00:26:29.999 cpu : usr=90.34%, sys=8.48%, ctx=60, majf=0, minf=0 00:26:29.999 IO depths : 1=6.5%, 2=93.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.999 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.999 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.999 00:26:29.999 Run status group 0 (all jobs): 00:26:29.999 READ: bw=103MiB/s (108MB/s), 29.7MiB/s-37.8MiB/s (31.1MB/s-39.6MB/s), io=515MiB (540MB), run=5002-5002msec 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 bdev_null0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 [2024-07-15 18:52:03.431581] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 bdev_null1 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
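Each of the three subsystems created in this pass becomes one "bdev_nvme_attach_controller" entry in the JSON printed a little further down, which is what gives fio its Nvme0/Nvme1/Nvme2 bdevs. The same attachment can be expressed as a single RPC; the standalone rpc.py call below is an assumption, with parameters taken from the generated JSON:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1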
00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 bdev_null2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.999 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.000 { 
00:26:30.000 "params": { 00:26:30.000 "name": "Nvme$subsystem", 00:26:30.000 "trtype": "$TEST_TRANSPORT", 00:26:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "$NVMF_PORT", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.000 "hdgst": ${hdgst:-false}, 00:26:30.000 "ddgst": ${ddgst:-false} 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 } 00:26:30.000 EOF 00:26:30.000 )") 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.000 { 00:26:30.000 "params": { 00:26:30.000 "name": "Nvme$subsystem", 00:26:30.000 "trtype": "$TEST_TRANSPORT", 00:26:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "$NVMF_PORT", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.000 "hdgst": ${hdgst:-false}, 00:26:30.000 "ddgst": ${ddgst:-false} 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 } 00:26:30.000 EOF 00:26:30.000 )") 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.000 { 00:26:30.000 "params": { 00:26:30.000 "name": "Nvme$subsystem", 00:26:30.000 "trtype": "$TEST_TRANSPORT", 00:26:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "$NVMF_PORT", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.000 "hdgst": ${hdgst:-false}, 00:26:30.000 "ddgst": ${ddgst:-false} 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 } 00:26:30.000 EOF 00:26:30.000 )") 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.000 "params": { 00:26:30.000 "name": "Nvme0", 00:26:30.000 "trtype": "tcp", 00:26:30.000 "traddr": "10.0.0.2", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "4420", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.000 "hdgst": false, 00:26:30.000 "ddgst": false 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 },{ 00:26:30.000 "params": { 00:26:30.000 "name": "Nvme1", 00:26:30.000 "trtype": "tcp", 00:26:30.000 "traddr": "10.0.0.2", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "4420", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.000 "hdgst": false, 00:26:30.000 "ddgst": false 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 },{ 00:26:30.000 "params": { 00:26:30.000 "name": "Nvme2", 00:26:30.000 "trtype": "tcp", 00:26:30.000 "traddr": "10.0.0.2", 00:26:30.000 "adrfam": "ipv4", 00:26:30.000 "trsvcid": "4420", 00:26:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.000 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.000 "hdgst": false, 00:26:30.000 "ddgst": false 00:26:30.000 }, 00:26:30.000 "method": "bdev_nvme_attach_controller" 00:26:30.000 }' 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:30.000 18:52:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:30.000 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.000 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.000 ... 00:26:30.000 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.000 ... 00:26:30.000 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.000 ... 00:26:30.000 fio-3.35 00:26:30.000 Starting 24 threads 00:26:42.259 00:26:42.259 filename0: (groupid=0, jobs=1): err= 0: pid=98896: Mon Jul 15 18:52:14 2024 00:26:42.259 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.1MiB/10038msec) 00:26:42.259 slat (usec): min=4, max=4036, avg=12.24, stdev=80.44 00:26:42.259 clat (msec): min=2, max=170, avg=61.91, stdev=24.92 00:26:42.259 lat (msec): min=2, max=170, avg=61.93, stdev=24.92 00:26:42.259 clat percentiles (msec): 00:26:42.259 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 43], 00:26:42.259 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 60], 60.00th=[ 64], 00:26:42.259 | 70.00th=[ 71], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 107], 00:26:42.259 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:26:42.259 | 99.99th=[ 171] 00:26:42.259 bw ( KiB/s): min= 640, max= 1650, per=4.68%, avg=1029.30, stdev=254.52, samples=20 00:26:42.259 iops : min= 160, max= 412, avg=257.30, stdev=63.57, samples=20 00:26:42.259 lat (msec) : 4=0.62%, 10=1.85%, 20=0.62%, 50=37.32%, 100=52.61% 00:26:42.259 lat (msec) : 250=6.99% 00:26:42.259 cpu : usr=35.66%, sys=2.08%, ctx=1055, majf=0, minf=0 00:26:42.259 IO depths : 1=1.0%, 2=2.9%, 4=11.1%, 8=72.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:42.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.259 filename0: (groupid=0, jobs=1): err= 0: pid=98897: Mon Jul 15 18:52:14 2024 00:26:42.259 read: IOPS=199, BW=799KiB/s (818kB/s)(8000KiB/10010msec) 00:26:42.259 slat (usec): min=4, max=8030, avg=23.51, stdev=253.30 00:26:42.259 clat (msec): min=12, max=146, avg=79.92, stdev=21.83 00:26:42.259 lat (msec): min=12, max=146, avg=79.95, stdev=21.83 00:26:42.259 clat percentiles (msec): 00:26:42.259 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 62], 00:26:42.259 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:26:42.259 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 123], 00:26:42.259 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:26:42.259 | 99.99th=[ 146] 00:26:42.259 bw ( KiB/s): min= 512, max= 928, per=3.54%, avg=778.53, stdev=99.79, samples=19 00:26:42.259 iops : min= 128, max= 232, avg=194.58, stdev=24.96, samples=19 00:26:42.259 lat (msec) : 20=0.80%, 50=4.90%, 100=76.35%, 250=17.95% 00:26:42.259 cpu : usr=38.68%, sys=1.84%, ctx=1117, majf=0, minf=9 00:26:42.259 IO depths : 1=3.5%, 2=7.3%, 4=19.2%, 8=60.6%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:42.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.259 filename0: (groupid=0, jobs=1): err= 0: pid=98898: Mon Jul 15 18:52:14 2024 00:26:42.259 read: IOPS=228, BW=913KiB/s (935kB/s)(9176KiB/10047msec) 00:26:42.259 slat (nsec): min=6142, max=48524, avg=11000.78, stdev=5299.31 00:26:42.259 clat (msec): min=7, max=191, avg=69.90, stdev=24.80 00:26:42.259 lat (msec): min=7, max=191, avg=69.91, stdev=24.80 00:26:42.259 clat percentiles (msec): 00:26:42.259 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 50], 00:26:42.259 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:26:42.259 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 114], 00:26:42.259 | 99.00th=[ 142], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 192], 00:26:42.259 | 99.99th=[ 192] 00:26:42.259 bw ( KiB/s): min= 640, max= 1490, per=4.14%, avg=910.90, stdev=196.57, samples=20 00:26:42.259 iops : min= 160, max= 372, avg=227.70, stdev=49.06, samples=20 00:26:42.259 lat (msec) : 10=1.39%, 20=0.70%, 50=19.88%, 100=66.65%, 250=11.38% 00:26:42.259 cpu : usr=31.80%, sys=1.50%, ctx=873, majf=0, minf=9 00:26:42.259 IO depths : 1=1.3%, 2=3.1%, 4=11.9%, 8=71.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:42.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.259 filename0: (groupid=0, jobs=1): err= 0: pid=98899: Mon Jul 15 18:52:14 2024 00:26:42.259 read: IOPS=200, BW=801KiB/s (820kB/s)(8012KiB/10005msec) 00:26:42.259 slat (usec): min=6, max=4024, avg=19.42, stdev=175.74 00:26:42.259 clat (msec): min=12, max=169, avg=79.78, stdev=23.00 00:26:42.259 lat (msec): min=12, max=169, avg=79.80, stdev=23.01 00:26:42.259 clat percentiles (msec): 00:26:42.259 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:26:42.259 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 85], 00:26:42.259 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 121], 00:26:42.259 | 99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:26:42.259 | 99.99th=[ 169] 00:26:42.259 bw ( KiB/s): min= 512, max= 912, per=3.58%, avg=786.84, stdev=121.85, samples=19 00:26:42.259 iops : min= 128, max= 228, avg=196.68, stdev=30.50, samples=19 00:26:42.259 lat (msec) : 20=0.55%, 50=6.59%, 100=73.99%, 250=18.87% 00:26:42.259 cpu : usr=38.31%, sys=1.95%, ctx=1155, majf=0, minf=9 00:26:42.259 IO depths : 1=3.1%, 2=6.8%, 4=17.7%, 8=62.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:42.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.259 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.259 filename0: (groupid=0, jobs=1): err= 0: pid=98900: Mon Jul 15 18:52:14 2024 00:26:42.259 read: IOPS=256, BW=1027KiB/s (1051kB/s)(10.0MiB/10017msec) 00:26:42.259 slat (usec): min=6, max=3975, avg=11.72, stdev=78.31 00:26:42.259 clat (msec): min=24, max=172, avg=62.27, stdev=21.34 00:26:42.259 lat (msec): min=24, max=172, avg=62.28, stdev=21.34 00:26:42.259 clat percentiles (msec): 00:26:42.259 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 45], 00:26:42.259 | 
30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 64], 00:26:42.259 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 105], 00:26:42.259 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 174], 99.95th=[ 174], 00:26:42.259 | 99.99th=[ 174] 00:26:42.259 bw ( KiB/s): min= 592, max= 1344, per=4.65%, avg=1022.00, stdev=190.87, samples=20 00:26:42.260 iops : min= 148, max= 336, avg=255.50, stdev=47.72, samples=20 00:26:42.260 lat (msec) : 50=36.95%, 100=57.72%, 250=5.33% 00:26:42.260 cpu : usr=40.07%, sys=2.00%, ctx=1148, majf=0, minf=9 00:26:42.260 IO depths : 1=0.5%, 2=1.1%, 4=7.0%, 8=78.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename0: (groupid=0, jobs=1): err= 0: pid=98901: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=230, BW=922KiB/s (944kB/s)(9240KiB/10024msec) 00:26:42.260 slat (usec): min=4, max=4007, avg=12.66, stdev=83.28 00:26:42.260 clat (msec): min=32, max=167, avg=69.27, stdev=23.95 00:26:42.260 lat (msec): min=32, max=167, avg=69.28, stdev=23.95 00:26:42.260 clat percentiles (msec): 00:26:42.260 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:26:42.260 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:26:42.260 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 114], 00:26:42.260 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:26:42.260 | 99.99th=[ 169] 00:26:42.260 bw ( KiB/s): min= 512, max= 1200, per=4.19%, avg=921.60, stdev=200.00, samples=20 00:26:42.260 iops : min= 128, max= 300, avg=230.40, stdev=50.00, samples=20 00:26:42.260 lat (msec) : 50=24.98%, 100=63.85%, 250=11.17% 00:26:42.260 cpu : usr=34.14%, sys=1.85%, ctx=898, majf=0, minf=9 00:26:42.260 IO depths : 1=1.1%, 2=2.4%, 4=10.0%, 8=74.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename0: (groupid=0, jobs=1): err= 0: pid=98902: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=255, BW=1024KiB/s (1048kB/s)(10.0MiB/10025msec) 00:26:42.260 slat (nsec): min=5107, max=65620, avg=10613.45, stdev=4386.23 00:26:42.260 clat (msec): min=25, max=158, avg=62.39, stdev=23.67 00:26:42.260 lat (msec): min=25, max=158, avg=62.40, stdev=23.67 00:26:42.260 clat percentiles (msec): 00:26:42.260 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 43], 00:26:42.260 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 63], 00:26:42.260 | 70.00th=[ 69], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 112], 00:26:42.260 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:26:42.260 | 99.99th=[ 159] 00:26:42.260 bw ( KiB/s): min= 640, max= 1280, per=4.65%, avg=1022.40, stdev=232.20, samples=20 00:26:42.260 iops : min= 160, max= 320, avg=255.60, stdev=58.05, samples=20 00:26:42.260 lat (msec) : 50=41.19%, 100=49.88%, 250=8.92% 00:26:42.260 cpu : usr=40.89%, sys=2.13%, ctx=1606, majf=0, minf=9 00:26:42.260 IO depths : 1=0.7%, 2=1.8%, 4=8.1%, 8=76.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=2566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename0: (groupid=0, jobs=1): err= 0: pid=98903: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=261, BW=1046KiB/s (1071kB/s)(10.3MiB/10037msec) 00:26:42.260 slat (usec): min=4, max=4004, avg=11.97, stdev=78.08 00:26:42.260 clat (msec): min=3, max=139, avg=61.03, stdev=20.52 00:26:42.260 lat (msec): min=3, max=139, avg=61.04, stdev=20.52 00:26:42.260 clat percentiles (msec): 00:26:42.260 | 1.00th=[ 8], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 00:26:42.260 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 64], 00:26:42.260 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:26:42.260 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 140], 00:26:42.260 | 99.99th=[ 140] 00:26:42.260 bw ( KiB/s): min= 664, max= 1792, per=4.75%, avg=1043.25, stdev=253.54, samples=20 00:26:42.260 iops : min= 166, max= 448, avg=260.80, stdev=63.40, samples=20 00:26:42.260 lat (msec) : 4=0.61%, 10=1.22%, 20=0.61%, 50=32.15%, 100=61.94% 00:26:42.260 lat (msec) : 250=3.47% 00:26:42.260 cpu : usr=43.42%, sys=1.95%, ctx=1339, majf=0, minf=9 00:26:42.260 IO depths : 1=1.6%, 2=3.9%, 4=12.5%, 8=70.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=2625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename1: (groupid=0, jobs=1): err= 0: pid=98904: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=222, BW=891KiB/s (912kB/s)(8932KiB/10030msec) 00:26:42.260 slat (usec): min=5, max=8018, avg=22.49, stdev=282.92 00:26:42.260 clat (msec): min=29, max=172, avg=71.63, stdev=21.80 00:26:42.260 lat (msec): min=29, max=172, avg=71.66, stdev=21.80 00:26:42.260 clat percentiles (msec): 00:26:42.260 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 56], 00:26:42.260 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:26:42.260 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 103], 95.00th=[ 115], 00:26:42.260 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 174], 99.95th=[ 174], 00:26:42.260 | 99.99th=[ 174] 00:26:42.260 bw ( KiB/s): min= 640, max= 1072, per=4.05%, avg=889.60, stdev=131.93, samples=20 00:26:42.260 iops : min= 160, max= 268, avg=222.40, stdev=32.98, samples=20 00:26:42.260 lat (msec) : 50=17.60%, 100=71.47%, 250=10.93% 00:26:42.260 cpu : usr=32.50%, sys=1.65%, ctx=917, majf=0, minf=9 00:26:42.260 IO depths : 1=1.3%, 2=3.1%, 4=11.7%, 8=71.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename1: (groupid=0, jobs=1): err= 0: pid=98905: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=198, BW=793KiB/s (812kB/s)(7944KiB/10016msec) 00:26:42.260 slat (usec): min=6, max=8039, avg=15.55, stdev=180.20 00:26:42.260 clat (msec): min=15, max=167, avg=80.51, stdev=23.75 00:26:42.260 lat (msec): min=15, max=167, avg=80.52, stdev=23.74 00:26:42.260 clat 
percentiles (msec): 00:26:42.260 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 61], 00:26:42.260 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:26:42.260 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 126], 00:26:42.260 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:26:42.260 | 99.99th=[ 169] 00:26:42.260 bw ( KiB/s): min= 472, max= 1056, per=3.57%, avg=785.84, stdev=132.94, samples=19 00:26:42.260 iops : min= 118, max= 264, avg=196.42, stdev=33.22, samples=19 00:26:42.260 lat (msec) : 20=0.30%, 50=5.89%, 100=76.84%, 250=16.97% 00:26:42.260 cpu : usr=31.65%, sys=1.59%, ctx=862, majf=0, minf=9 00:26:42.260 IO depths : 1=3.0%, 2=6.6%, 4=16.8%, 8=63.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:42.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.260 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.260 filename1: (groupid=0, jobs=1): err= 0: pid=98906: Mon Jul 15 18:52:14 2024 00:26:42.260 read: IOPS=218, BW=874KiB/s (894kB/s)(8736KiB/10001msec) 00:26:42.260 slat (usec): min=6, max=8026, avg=23.25, stdev=284.34 00:26:42.260 clat (usec): min=1624, max=160511, avg=73119.40, stdev=25853.75 00:26:42.260 lat (usec): min=1631, max=160527, avg=73142.65, stdev=25860.69 00:26:42.260 clat percentiles (usec): 00:26:42.260 | 1.00th=[ 1876], 5.00th=[ 39060], 10.00th=[ 44827], 20.00th=[ 57410], 00:26:42.260 | 30.00th=[ 61080], 40.00th=[ 65799], 50.00th=[ 68682], 60.00th=[ 72877], 00:26:42.260 | 70.00th=[ 83362], 80.00th=[ 91751], 90.00th=[106431], 95.00th=[122160], 00:26:42.260 | 99.00th=[143655], 99.50th=[147850], 99.90th=[160433], 99.95th=[160433], 00:26:42.260 | 99.99th=[160433] 00:26:42.261 bw ( KiB/s): min= 512, max= 1024, per=3.85%, avg=845.47, stdev=138.82, samples=19 00:26:42.261 iops : min= 128, max= 256, avg=211.37, stdev=34.71, samples=19 00:26:42.261 lat (msec) : 2=1.47%, 4=0.73%, 10=0.73%, 50=10.35%, 100=72.34% 00:26:42.261 lat (msec) : 250=14.38% 00:26:42.261 cpu : usr=41.11%, sys=2.03%, ctx=1374, majf=0, minf=9 00:26:42.261 IO depths : 1=2.9%, 2=6.1%, 4=15.2%, 8=65.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename1: (groupid=0, jobs=1): err= 0: pid=98907: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=235, BW=941KiB/s (964kB/s)(9444KiB/10037msec) 00:26:42.261 slat (usec): min=6, max=8021, avg=17.89, stdev=233.06 00:26:42.261 clat (msec): min=26, max=185, avg=67.86, stdev=23.92 00:26:42.261 lat (msec): min=26, max=185, avg=67.87, stdev=23.93 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:26:42.261 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69], 00:26:42.261 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 110], 00:26:42.261 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 186], 00:26:42.261 | 99.99th=[ 186] 00:26:42.261 bw ( KiB/s): min= 640, max= 1392, per=4.27%, avg=938.00, stdev=177.37, samples=20 00:26:42.261 iops : min= 160, max= 348, avg=234.50, stdev=44.34, samples=20 00:26:42.261 lat (msec) : 50=26.43%, 
100=65.57%, 250=8.01% 00:26:42.261 cpu : usr=35.35%, sys=1.82%, ctx=886, majf=0, minf=10 00:26:42.261 IO depths : 1=0.8%, 2=2.2%, 4=10.0%, 8=74.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=89.9%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename1: (groupid=0, jobs=1): err= 0: pid=98908: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=260, BW=1042KiB/s (1067kB/s)(10.2MiB/10044msec) 00:26:42.261 slat (usec): min=3, max=14138, avg=22.36, stdev=320.07 00:26:42.261 clat (msec): min=7, max=147, avg=61.20, stdev=21.29 00:26:42.261 lat (msec): min=7, max=148, avg=61.23, stdev=21.29 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 44], 00:26:42.261 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:26:42.261 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 101], 00:26:42.261 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:26:42.261 | 99.99th=[ 148] 00:26:42.261 bw ( KiB/s): min= 640, max= 1410, per=4.73%, avg=1039.65, stdev=217.18, samples=20 00:26:42.261 iops : min= 160, max= 352, avg=259.85, stdev=54.26, samples=20 00:26:42.261 lat (msec) : 10=1.22%, 20=0.61%, 50=32.00%, 100=61.05%, 250=5.12% 00:26:42.261 cpu : usr=42.11%, sys=1.95%, ctx=1244, majf=0, minf=9 00:26:42.261 IO depths : 1=0.8%, 2=1.8%, 4=8.3%, 8=76.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename1: (groupid=0, jobs=1): err= 0: pid=98909: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=212, BW=851KiB/s (872kB/s)(8516KiB/10003msec) 00:26:42.261 slat (usec): min=3, max=4021, avg=14.88, stdev=122.93 00:26:42.261 clat (msec): min=2, max=161, avg=75.07, stdev=25.16 00:26:42.261 lat (msec): min=2, max=161, avg=75.08, stdev=25.16 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 5], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 58], 00:26:42.261 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 75], 00:26:42.261 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 110], 95.00th=[ 129], 00:26:42.261 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 163], 00:26:42.261 | 99.99th=[ 163] 00:26:42.261 bw ( KiB/s): min= 512, max= 1024, per=3.76%, avg=825.68, stdev=166.63, samples=19 00:26:42.261 iops : min= 128, max= 256, avg=206.42, stdev=41.66, samples=19 00:26:42.261 lat (msec) : 4=0.38%, 10=0.75%, 20=0.42%, 50=8.97%, 100=74.17% 00:26:42.261 lat (msec) : 250=15.31% 00:26:42.261 cpu : usr=42.93%, sys=2.16%, ctx=1311, majf=0, minf=9 00:26:42.261 IO depths : 1=2.9%, 2=6.3%, 4=16.4%, 8=64.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename1: (groupid=0, jobs=1): err= 0: pid=98910: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=220, BW=883KiB/s 
(904kB/s)(8836KiB/10009msec) 00:26:42.261 slat (usec): min=4, max=8017, avg=23.67, stdev=282.36 00:26:42.261 clat (msec): min=12, max=144, avg=72.32, stdev=21.00 00:26:42.261 lat (msec): min=12, max=144, avg=72.34, stdev=21.01 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 58], 00:26:42.261 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:26:42.261 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 107], 00:26:42.261 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:26:42.261 | 99.99th=[ 146] 00:26:42.261 bw ( KiB/s): min= 640, max= 1248, per=3.97%, avg=873.79, stdev=149.25, samples=19 00:26:42.261 iops : min= 160, max= 312, avg=218.42, stdev=37.30, samples=19 00:26:42.261 lat (msec) : 20=0.32%, 50=11.14%, 100=75.65%, 250=12.90% 00:26:42.261 cpu : usr=42.05%, sys=2.08%, ctx=1568, majf=0, minf=9 00:26:42.261 IO depths : 1=2.1%, 2=4.7%, 4=13.7%, 8=68.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename1: (groupid=0, jobs=1): err= 0: pid=98911: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=234, BW=936KiB/s (959kB/s)(9392KiB/10032msec) 00:26:42.261 slat (usec): min=4, max=5996, avg=13.31, stdev=123.77 00:26:42.261 clat (msec): min=14, max=176, avg=68.19, stdev=24.22 00:26:42.261 lat (msec): min=14, max=176, avg=68.20, stdev=24.22 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 27], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 47], 00:26:42.261 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 70], 00:26:42.261 | 70.00th=[ 78], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 109], 00:26:42.261 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 176], 99.95th=[ 176], 00:26:42.261 | 99.99th=[ 176] 00:26:42.261 bw ( KiB/s): min= 624, max= 1248, per=4.24%, avg=932.85, stdev=172.80, samples=20 00:26:42.261 iops : min= 156, max= 312, avg=233.20, stdev=43.20, samples=20 00:26:42.261 lat (msec) : 20=0.68%, 50=25.85%, 100=62.39%, 250=11.07% 00:26:42.261 cpu : usr=40.67%, sys=1.99%, ctx=1417, majf=0, minf=9 00:26:42.261 IO depths : 1=1.4%, 2=3.1%, 4=10.9%, 8=72.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename2: (groupid=0, jobs=1): err= 0: pid=98912: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=206, BW=826KiB/s (846kB/s)(8268KiB/10008msec) 00:26:42.261 slat (usec): min=4, max=4007, avg=12.90, stdev=88.02 00:26:42.261 clat (msec): min=34, max=159, avg=77.33, stdev=24.07 00:26:42.261 lat (msec): min=34, max=159, avg=77.35, stdev=24.07 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:26:42.261 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 80], 00:26:42.261 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 129], 00:26:42.261 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:26:42.261 | 99.99th=[ 161] 00:26:42.261 bw ( KiB/s): min= 552, max= 1088, per=3.71%, avg=815.74, 
stdev=158.19, samples=19 00:26:42.261 iops : min= 138, max= 272, avg=203.89, stdev=39.54, samples=19 00:26:42.261 lat (msec) : 50=12.09%, 100=71.21%, 250=16.69% 00:26:42.261 cpu : usr=36.95%, sys=1.76%, ctx=1090, majf=0, minf=9 00:26:42.261 IO depths : 1=2.0%, 2=4.2%, 4=13.2%, 8=69.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename2: (groupid=0, jobs=1): err= 0: pid=98913: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=201, BW=807KiB/s (827kB/s)(8076KiB/10005msec) 00:26:42.261 slat (usec): min=4, max=8022, avg=14.70, stdev=178.36 00:26:42.261 clat (msec): min=12, max=185, avg=79.20, stdev=26.03 00:26:42.261 lat (msec): min=12, max=185, avg=79.22, stdev=26.03 00:26:42.261 clat percentiles (msec): 00:26:42.261 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 60], 00:26:42.261 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 84], 00:26:42.261 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 112], 95.00th=[ 130], 00:26:42.261 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 186], 00:26:42.261 | 99.99th=[ 186] 00:26:42.261 bw ( KiB/s): min= 510, max= 976, per=3.60%, avg=790.63, stdev=130.37, samples=19 00:26:42.261 iops : min= 127, max= 244, avg=197.63, stdev=32.65, samples=19 00:26:42.261 lat (msec) : 20=0.54%, 50=9.46%, 100=75.19%, 250=14.81% 00:26:42.261 cpu : usr=32.97%, sys=1.70%, ctx=934, majf=0, minf=9 00:26:42.261 IO depths : 1=1.9%, 2=4.7%, 4=14.8%, 8=67.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:42.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.261 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.261 filename2: (groupid=0, jobs=1): err= 0: pid=98914: Mon Jul 15 18:52:14 2024 00:26:42.261 read: IOPS=225, BW=904KiB/s (926kB/s)(9048KiB/10010msec) 00:26:42.261 slat (usec): min=4, max=8044, avg=39.75, stdev=475.55 00:26:42.261 clat (msec): min=21, max=148, avg=70.56, stdev=21.53 00:26:42.261 lat (msec): min=21, max=148, avg=70.60, stdev=21.56 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 51], 00:26:42.262 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:26:42.262 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 108], 00:26:42.262 | 99.00th=[ 126], 99.50th=[ 134], 99.90th=[ 148], 99.95th=[ 148], 00:26:42.262 | 99.99th=[ 148] 00:26:42.262 bw ( KiB/s): min= 640, max= 1072, per=4.06%, avg=891.26, stdev=122.10, samples=19 00:26:42.262 iops : min= 160, max= 268, avg=222.79, stdev=30.52, samples=19 00:26:42.262 lat (msec) : 50=19.85%, 100=70.91%, 250=9.24% 00:26:42.262 cpu : usr=31.62%, sys=1.59%, ctx=863, majf=0, minf=9 00:26:42.262 IO depths : 1=0.9%, 2=1.9%, 4=8.5%, 8=76.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.262 filename2: (groupid=0, jobs=1): err= 0: 
pid=98915: Mon Jul 15 18:52:14 2024 00:26:42.262 read: IOPS=243, BW=972KiB/s (996kB/s)(9748KiB/10025msec) 00:26:42.262 slat (usec): min=6, max=7045, avg=15.20, stdev=155.16 00:26:42.262 clat (msec): min=29, max=183, avg=65.65, stdev=23.67 00:26:42.262 lat (msec): min=29, max=183, avg=65.67, stdev=23.67 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 00:26:42.262 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:26:42.262 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 110], 00:26:42.262 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 184], 99.95th=[ 184], 00:26:42.262 | 99.99th=[ 184] 00:26:42.262 bw ( KiB/s): min= 496, max= 1328, per=4.43%, avg=972.40, stdev=203.41, samples=20 00:26:42.262 iops : min= 124, max= 332, avg=243.10, stdev=50.85, samples=20 00:26:42.262 lat (msec) : 50=32.50%, 100=59.42%, 250=8.08% 00:26:42.262 cpu : usr=37.68%, sys=1.92%, ctx=1065, majf=0, minf=9 00:26:42.262 IO depths : 1=0.7%, 2=1.6%, 4=8.8%, 8=76.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.262 filename2: (groupid=0, jobs=1): err= 0: pid=98916: Mon Jul 15 18:52:14 2024 00:26:42.262 read: IOPS=252, BW=1011KiB/s (1036kB/s)(9.91MiB/10037msec) 00:26:42.262 slat (usec): min=4, max=4016, avg=12.05, stdev=79.63 00:26:42.262 clat (msec): min=27, max=139, avg=63.18, stdev=19.15 00:26:42.262 lat (msec): min=27, max=139, avg=63.19, stdev=19.15 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 47], 00:26:42.262 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 65], 00:26:42.262 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 99], 00:26:42.262 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 140], 00:26:42.262 | 99.99th=[ 140] 00:26:42.262 bw ( KiB/s): min= 763, max= 1280, per=4.59%, avg=1008.55, stdev=152.61, samples=20 00:26:42.262 iops : min= 190, max= 320, avg=252.10, stdev=38.22, samples=20 00:26:42.262 lat (msec) : 50=30.81%, 100=65.29%, 250=3.90% 00:26:42.262 cpu : usr=35.49%, sys=1.64%, ctx=998, majf=0, minf=9 00:26:42.262 IO depths : 1=0.2%, 2=0.7%, 4=6.6%, 8=78.7%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.262 filename2: (groupid=0, jobs=1): err= 0: pid=98917: Mon Jul 15 18:52:14 2024 00:26:42.262 read: IOPS=244, BW=977KiB/s (1001kB/s)(9824KiB/10051msec) 00:26:42.262 slat (nsec): min=3363, max=91230, avg=10719.65, stdev=4675.10 00:26:42.262 clat (msec): min=16, max=184, avg=65.36, stdev=22.87 00:26:42.262 lat (msec): min=16, max=184, avg=65.37, stdev=22.87 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 00:26:42.262 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 00:26:42.262 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 102], 00:26:42.262 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 184], 00:26:42.262 | 99.99th=[ 184] 00:26:42.262 
bw ( KiB/s): min= 512, max= 1248, per=4.44%, avg=976.10, stdev=181.79, samples=20 00:26:42.262 iops : min= 128, max= 312, avg=244.00, stdev=45.42, samples=20 00:26:42.262 lat (msec) : 20=0.57%, 50=27.04%, 100=67.14%, 250=5.25% 00:26:42.262 cpu : usr=34.67%, sys=1.59%, ctx=1049, majf=0, minf=9 00:26:42.262 IO depths : 1=0.5%, 2=1.4%, 4=8.3%, 8=76.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.262 filename2: (groupid=0, jobs=1): err= 0: pid=98918: Mon Jul 15 18:52:14 2024 00:26:42.262 read: IOPS=231, BW=926KiB/s (949kB/s)(9276KiB/10014msec) 00:26:42.262 slat (nsec): min=4719, max=49272, avg=10620.88, stdev=4475.19 00:26:42.262 clat (msec): min=30, max=219, avg=69.00, stdev=26.67 00:26:42.262 lat (msec): min=30, max=219, avg=69.01, stdev=26.67 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 47], 00:26:42.262 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 71], 00:26:42.262 | 70.00th=[ 78], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 112], 00:26:42.262 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 220], 99.95th=[ 220], 00:26:42.262 | 99.99th=[ 220] 00:26:42.262 bw ( KiB/s): min= 512, max= 1248, per=4.20%, avg=923.50, stdev=216.24, samples=20 00:26:42.262 iops : min= 128, max= 312, avg=230.85, stdev=54.06, samples=20 00:26:42.262 lat (msec) : 50=28.76%, 100=61.49%, 250=9.75% 00:26:42.262 cpu : usr=38.28%, sys=2.04%, ctx=1457, majf=0, minf=9 00:26:42.262 IO depths : 1=0.4%, 2=0.9%, 4=7.6%, 8=77.5%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=89.1%, 8=6.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.262 filename2: (groupid=0, jobs=1): err= 0: pid=98919: Mon Jul 15 18:52:14 2024 00:26:42.262 read: IOPS=206, BW=827KiB/s (847kB/s)(8276KiB/10009msec) 00:26:42.262 slat (usec): min=4, max=8004, avg=15.27, stdev=175.79 00:26:42.262 clat (msec): min=25, max=169, avg=77.22, stdev=21.44 00:26:42.262 lat (msec): min=25, max=169, avg=77.23, stdev=21.45 00:26:42.262 clat percentiles (msec): 00:26:42.262 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:26:42.262 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:26:42.262 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 117], 00:26:42.262 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:26:42.262 | 99.99th=[ 169] 00:26:42.262 bw ( KiB/s): min= 640, max= 1072, per=3.76%, avg=825.60, stdev=129.35, samples=20 00:26:42.262 iops : min= 160, max= 268, avg=206.40, stdev=32.34, samples=20 00:26:42.262 lat (msec) : 50=10.20%, 100=73.90%, 250=15.90% 00:26:42.262 cpu : usr=34.70%, sys=1.63%, ctx=961, majf=0, minf=9 00:26:42.262 IO depths : 1=2.8%, 2=6.1%, 4=15.8%, 8=65.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:42.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.262 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.262 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:26:42.262 00:26:42.262 Run status group 0 (all jobs): 00:26:42.262 READ: bw=21.5MiB/s (22.5MB/s), 793KiB/s-1046KiB/s (812kB/s-1071kB/s), io=216MiB (226MB), run=10001-10051msec 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:42.262 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 bdev_null0 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 [2024-07-15 18:52:14.906977] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 bdev_null1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.263 { 00:26:42.263 "params": { 00:26:42.263 "name": "Nvme$subsystem", 00:26:42.263 "trtype": "$TEST_TRANSPORT", 00:26:42.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.263 "adrfam": "ipv4", 00:26:42.263 "trsvcid": "$NVMF_PORT", 00:26:42.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.263 "hdgst": ${hdgst:-false}, 00:26:42.263 "ddgst": ${ddgst:-false} 00:26:42.263 }, 00:26:42.263 "method": "bdev_nvme_attach_controller" 00:26:42.263 } 00:26:42.263 EOF 00:26:42.263 )") 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.263 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.263 { 00:26:42.263 "params": { 00:26:42.263 "name": "Nvme$subsystem", 00:26:42.263 "trtype": "$TEST_TRANSPORT", 00:26:42.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.264 "adrfam": "ipv4", 00:26:42.264 "trsvcid": "$NVMF_PORT", 00:26:42.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.264 "hdgst": ${hdgst:-false}, 00:26:42.264 "ddgst": ${ddgst:-false} 00:26:42.264 }, 00:26:42.264 "method": "bdev_nvme_attach_controller" 00:26:42.264 } 00:26:42.264 EOF 00:26:42.264 )") 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:42.264 "params": { 00:26:42.264 "name": "Nvme0", 00:26:42.264 "trtype": "tcp", 00:26:42.264 "traddr": "10.0.0.2", 00:26:42.264 "adrfam": "ipv4", 00:26:42.264 "trsvcid": "4420", 00:26:42.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:42.264 "hdgst": false, 00:26:42.264 "ddgst": false 00:26:42.264 }, 00:26:42.264 "method": "bdev_nvme_attach_controller" 00:26:42.264 },{ 00:26:42.264 "params": { 00:26:42.264 "name": "Nvme1", 00:26:42.264 "trtype": "tcp", 00:26:42.264 "traddr": "10.0.0.2", 00:26:42.264 "adrfam": "ipv4", 00:26:42.264 "trsvcid": "4420", 00:26:42.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:42.264 "hdgst": false, 00:26:42.264 "ddgst": false 00:26:42.264 }, 00:26:42.264 "method": "bdev_nvme_attach_controller" 00:26:42.264 }' 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:42.264 18:52:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.264 18:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.264 18:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.264 18:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:42.264 18:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.264 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:42.264 ... 00:26:42.264 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:42.264 ... 
00:26:42.264 fio-3.35 00:26:42.264 Starting 4 threads 00:26:46.467 00:26:46.468 filename0: (groupid=0, jobs=1): err= 0: pid=99045: Mon Jul 15 18:52:20 2024 00:26:46.468 read: IOPS=2196, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5003msec) 00:26:46.468 slat (nsec): min=4327, max=88306, avg=10592.52, stdev=4055.91 00:26:46.468 clat (usec): min=1731, max=5484, avg=3600.68, stdev=293.62 00:26:46.468 lat (usec): min=1738, max=5501, avg=3611.27, stdev=293.58 00:26:46.468 clat percentiles (usec): 00:26:46.468 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3458], 00:26:46.468 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:46.468 | 70.00th=[ 3720], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 4047], 00:26:46.468 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 4883], 99.95th=[ 4948], 00:26:46.468 | 99.99th=[ 5014] 00:26:46.468 bw ( KiB/s): min=17024, max=18064, per=24.99%, avg=17559.44, stdev=331.64, samples=9 00:26:46.468 iops : min= 2128, max= 2258, avg=2194.89, stdev=41.44, samples=9 00:26:46.468 lat (msec) : 2=0.05%, 4=94.54%, 10=5.42% 00:26:46.468 cpu : usr=92.42%, sys=6.60%, ctx=12, majf=0, minf=9 00:26:46.468 IO depths : 1=5.7%, 2=16.3%, 4=58.7%, 8=19.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 issued rwts: total=10987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:46.468 filename0: (groupid=0, jobs=1): err= 0: pid=99046: Mon Jul 15 18:52:20 2024 00:26:46.468 read: IOPS=2195, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5002msec) 00:26:46.468 slat (usec): min=4, max=153, avg=12.74, stdev= 4.81 00:26:46.468 clat (usec): min=1128, max=7325, avg=3583.36, stdev=386.20 00:26:46.468 lat (usec): min=1140, max=7338, avg=3596.10, stdev=386.40 00:26:46.468 clat percentiles (usec): 00:26:46.468 | 1.00th=[ 2507], 5.00th=[ 3130], 10.00th=[ 3326], 20.00th=[ 3425], 00:26:46.468 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3654], 00:26:46.468 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3851], 00:26:46.468 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6259], 99.95th=[ 6390], 00:26:46.468 | 99.99th=[ 6718] 00:26:46.468 bw ( KiB/s): min=17040, max=18048, per=24.98%, avg=17554.11, stdev=326.35, samples=9 00:26:46.468 iops : min= 2130, max= 2256, avg=2194.22, stdev=40.77, samples=9 00:26:46.468 lat (msec) : 2=0.23%, 4=96.32%, 10=3.45% 00:26:46.468 cpu : usr=92.04%, sys=6.96%, ctx=10, majf=0, minf=9 00:26:46.468 IO depths : 1=7.0%, 2=25.0%, 4=50.0%, 8=18.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 issued rwts: total=10984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:46.468 filename1: (groupid=0, jobs=1): err= 0: pid=99047: Mon Jul 15 18:52:20 2024 00:26:46.468 read: IOPS=2192, BW=17.1MiB/s (18.0MB/s)(85.7MiB/5003msec) 00:26:46.468 slat (nsec): min=3329, max=58132, avg=13249.90, stdev=4551.34 00:26:46.468 clat (usec): min=1748, max=5629, avg=3594.30, stdev=266.99 00:26:46.468 lat (usec): min=1755, max=5642, avg=3607.55, stdev=267.28 00:26:46.468 clat percentiles (usec): 00:26:46.468 | 1.00th=[ 2769], 5.00th=[ 3195], 10.00th=[ 3326], 20.00th=[ 3458], 00:26:46.468 | 30.00th=[ 3490], 40.00th=[ 
3556], 50.00th=[ 3589], 60.00th=[ 3654], 00:26:46.468 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3916], 00:26:46.468 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5407], 99.95th=[ 5538], 00:26:46.468 | 99.99th=[ 5604] 00:26:46.468 bw ( KiB/s): min=17024, max=18096, per=24.93%, avg=17521.78, stdev=344.41, samples=9 00:26:46.468 iops : min= 2128, max= 2262, avg=2190.22, stdev=43.05, samples=9 00:26:46.468 lat (msec) : 2=0.03%, 4=95.78%, 10=4.19% 00:26:46.468 cpu : usr=92.68%, sys=6.26%, ctx=7, majf=0, minf=0 00:26:46.468 IO depths : 1=6.1%, 2=16.7%, 4=58.3%, 8=18.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 issued rwts: total=10968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:46.468 filename1: (groupid=0, jobs=1): err= 0: pid=99048: Mon Jul 15 18:52:20 2024 00:26:46.468 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5002msec) 00:26:46.468 slat (nsec): min=5954, max=32508, avg=8159.08, stdev=2767.90 00:26:46.468 clat (usec): min=1084, max=4973, avg=3595.16, stdev=225.39 00:26:46.468 lat (usec): min=1097, max=4997, avg=3603.32, stdev=225.61 00:26:46.468 clat percentiles (usec): 00:26:46.468 | 1.00th=[ 3097], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3458], 00:26:46.468 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:46.468 | 70.00th=[ 3720], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 3851], 00:26:46.468 | 99.00th=[ 4080], 99.50th=[ 4146], 99.90th=[ 4555], 99.95th=[ 4948], 00:26:46.468 | 99.99th=[ 4948] 00:26:46.468 bw ( KiB/s): min=16912, max=18176, per=25.06%, avg=17607.11, stdev=420.29, samples=9 00:26:46.468 iops : min= 2114, max= 2272, avg=2200.89, stdev=52.54, samples=9 00:26:46.468 lat (msec) : 2=0.29%, 4=98.42%, 10=1.29% 00:26:46.468 cpu : usr=92.22%, sys=6.78%, ctx=8, majf=0, minf=0 00:26:46.468 IO depths : 1=7.8%, 2=25.0%, 4=50.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.468 issued rwts: total=11008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:46.468 00:26:46.468 Run status group 0 (all jobs): 00:26:46.468 READ: bw=68.6MiB/s (72.0MB/s), 17.1MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=343MiB (360MB), run=5002-5003msec 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.725 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 ************************************ 00:26:46.726 END TEST fio_dif_rand_params 00:26:46.726 ************************************ 00:26:46.726 18:52:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 00:26:46.726 real 0m23.631s 00:26:46.726 user 2m4.128s 00:26:46.726 sys 0m7.901s 00:26:46.726 18:52:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 18:52:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:46.726 18:52:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:46.726 18:52:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:46.726 18:52:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 ************************************ 00:26:46.726 START TEST fio_dif_digest 00:26:46.726 ************************************ 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
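fio_dif_digest repeats the same flow, but the backing bdev is created with 16 bytes of per-block metadata and DIF type 3, and the initiator attaches with the NVMe/TCP header digest and data digest (CRC32C) switched on, which is what hdgst=true/ddgst=true end up meaning in the generated JSON. Outside the harness, the subsystem that create_subsystems builds next could be stood up by hand with scripts/rpc.py against an already-running nvmf_tgt; the sketch below mirrors the rpc_cmd calls traced in this test rather than being a capture from the log, and it assumes the default RPC socket and a tcp transport created when the target started.

# Sketch: manual equivalent of create_subsystem 0 for the digest test.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420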
00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 bdev_null0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.726 [2024-07-15 18:52:21.078671] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.726 { 00:26:46.726 "params": { 00:26:46.726 "name": "Nvme$subsystem", 00:26:46.726 "trtype": "$TEST_TRANSPORT", 00:26:46.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.726 "adrfam": "ipv4", 00:26:46.726 "trsvcid": "$NVMF_PORT", 00:26:46.726 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.726 "hdgst": ${hdgst:-false}, 00:26:46.726 "ddgst": ${ddgst:-false} 00:26:46.726 }, 00:26:46.726 "method": "bdev_nvme_attach_controller" 00:26:46.726 } 00:26:46.726 EOF 00:26:46.726 )") 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:46.726 "params": { 00:26:46.726 "name": "Nvme0", 00:26:46.726 "trtype": "tcp", 00:26:46.726 "traddr": "10.0.0.2", 00:26:46.726 "adrfam": "ipv4", 00:26:46.726 "trsvcid": "4420", 00:26:46.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.726 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:46.726 "hdgst": true, 00:26:46.726 "ddgst": true 00:26:46.726 }, 00:26:46.726 "method": "bdev_nvme_attach_controller" 00:26:46.726 }' 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:46.726 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.983 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:46.983 ... 
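The job file itself travels over /dev/fd/61 and is never echoed into the log. From the parameters chosen at the top of the test (128k blocks, iodepth 3, 3 jobs, 10 seconds) and the filename0 banner that follows, a job file along these lines would reproduce the run; the option names are assumptions, since gen_fio_conf's output is not captured here.

# Sketch: approximate reconstruction of the generated job file, written out by hand.
cat > /tmp/dif-digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
FIO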
00:26:46.983 fio-3.35 00:26:46.983 Starting 3 threads 00:26:59.188 00:26:59.188 filename0: (groupid=0, jobs=1): err= 0: pid=99154: Mon Jul 15 18:52:31 2024 00:26:59.188 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(303MiB/10005msec) 00:26:59.188 slat (nsec): min=6293, max=38894, avg=12517.38, stdev=3411.34 00:26:59.188 clat (usec): min=5845, max=22039, avg=12379.43, stdev=1172.68 00:26:59.188 lat (usec): min=5868, max=22046, avg=12391.95, stdev=1172.79 00:26:59.188 clat percentiles (usec): 00:26:59.188 | 1.00th=[ 7767], 5.00th=[10683], 10.00th=[11207], 20.00th=[11600], 00:26:59.188 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:26:59.188 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:26:59.188 | 99.00th=[14615], 99.50th=[15008], 99.90th=[19268], 99.95th=[20579], 00:26:59.188 | 99.99th=[22152] 00:26:59.188 bw ( KiB/s): min=29184, max=34048, per=35.60%, avg=30989.47, stdev=1303.88, samples=19 00:26:59.188 iops : min= 228, max= 266, avg=242.11, stdev=10.19, samples=19 00:26:59.188 lat (msec) : 10=2.44%, 20=97.48%, 50=0.08% 00:26:59.188 cpu : usr=90.79%, sys=8.07%, ctx=47, majf=0, minf=0 00:26:59.188 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.188 filename0: (groupid=0, jobs=1): err= 0: pid=99155: Mon Jul 15 18:52:31 2024 00:26:59.188 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(323MiB/10008msec) 00:26:59.188 slat (nsec): min=4556, max=54021, avg=14126.10, stdev=3691.92 00:26:59.188 clat (usec): min=5633, max=52413, avg=11619.50, stdev=2475.34 00:26:59.188 lat (usec): min=5638, max=52428, avg=11633.63, stdev=2475.37 00:26:59.188 clat percentiles (usec): 00:26:59.188 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:26:59.188 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:26:59.188 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:26:59.188 | 99.00th=[13173], 99.50th=[15401], 99.90th=[51643], 99.95th=[52167], 00:26:59.188 | 99.99th=[52167] 00:26:59.188 bw ( KiB/s): min=31488, max=34560, per=38.12%, avg=33185.68, stdev=854.23, samples=19 00:26:59.188 iops : min= 246, max= 270, avg=259.26, stdev= 6.67, samples=19 00:26:59.188 lat (msec) : 10=1.90%, 20=97.75%, 100=0.35% 00:26:59.188 cpu : usr=90.46%, sys=8.33%, ctx=6, majf=0, minf=0 00:26:59.188 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.188 filename0: (groupid=0, jobs=1): err= 0: pid=99156: Mon Jul 15 18:52:31 2024 00:26:59.188 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10046msec) 00:26:59.188 slat (nsec): min=6319, max=56846, avg=13173.46, stdev=4796.28 00:26:59.188 clat (usec): min=9152, max=49521, avg=16415.19, stdev=1599.84 00:26:59.188 lat (usec): min=9159, max=49534, avg=16428.36, stdev=1600.19 00:26:59.188 clat percentiles (usec): 00:26:59.188 | 1.00th=[10028], 5.00th=[15008], 10.00th=[15401], 20.00th=[15795], 00:26:59.188 | 
30.00th=[16057], 40.00th=[16319], 50.00th=[16450], 60.00th=[16712], 00:26:59.188 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17695], 00:26:59.188 | 99.00th=[18220], 99.50th=[18482], 99.90th=[47449], 99.95th=[49546], 00:26:59.188 | 99.99th=[49546] 00:26:59.188 bw ( KiB/s): min=22528, max=25344, per=26.89%, avg=23411.20, stdev=745.09, samples=20 00:26:59.188 iops : min= 176, max= 198, avg=182.90, stdev= 5.82, samples=20 00:26:59.188 lat (msec) : 10=0.98%, 20=98.74%, 50=0.27% 00:26:59.188 cpu : usr=91.03%, sys=7.94%, ctx=7, majf=0, minf=0 00:26:59.188 IO depths : 1=12.5%, 2=87.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.188 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.188 00:26:59.188 Run status group 0 (all jobs): 00:26:59.188 READ: bw=85.0MiB/s (89.1MB/s), 22.8MiB/s-32.2MiB/s (23.9MB/s-33.8MB/s), io=854MiB (895MB), run=10005-10046msec 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.188 ************************************ 00:26:59.188 END TEST fio_dif_digest 00:26:59.188 ************************************ 00:26:59.188 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.188 00:26:59.188 real 0m11.032s 00:26:59.188 user 0m27.936s 00:26:59.188 sys 0m2.737s 00:26:59.189 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.189 18:52:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:59.189 18:52:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:59.189 18:52:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.189 rmmod nvme_tcp 00:26:59.189 rmmod nvme_fabrics 00:26:59.189 rmmod 
nvme_keyring 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98393 ']' 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98393 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 98393 ']' 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 98393 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98393 00:26:59.189 killing process with pid 98393 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98393' 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@967 -- # kill 98393 00:26:59.189 18:52:32 nvmf_dif -- common/autotest_common.sh@972 -- # wait 98393 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:59.189 18:52:32 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:59.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:59.189 Waiting for block devices as requested 00:26:59.189 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:59.189 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.189 18:52:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:59.189 18:52:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.189 18:52:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:59.189 00:26:59.189 real 1m0.210s 00:26:59.189 user 3m47.023s 00:26:59.189 sys 0m20.748s 00:26:59.189 18:52:33 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.189 18:52:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:59.189 ************************************ 00:26:59.189 END TEST nvmf_dif 00:26:59.189 ************************************ 00:26:59.189 18:52:33 -- common/autotest_common.sh@1142 -- # return 0 00:26:59.189 18:52:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:59.189 18:52:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:59.189 18:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.189 18:52:33 -- common/autotest_common.sh@10 -- # set +x 00:26:59.189 ************************************ 00:26:59.189 START TEST nvmf_abort_qd_sizes 00:26:59.189 ************************************ 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:59.189 * Looking for test storage... 00:26:59.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:59.189 18:52:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:59.189 Cannot find device "nvmf_tgt_br" 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:59.189 Cannot find device "nvmf_tgt_br2" 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:59.189 Cannot find device "nvmf_tgt_br" 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:26:59.189 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:59.189 Cannot find device "nvmf_tgt_br2" 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:59.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:59.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:59.190 18:52:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:59.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:26:59.190 00:26:59.190 --- 10.0.0.2 ping statistics --- 00:26:59.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.190 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:59.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:59.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:26:59.190 00:26:59.190 --- 10.0.0.3 ping statistics --- 00:26:59.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.190 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:59.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:26:59.190 00:26:59.190 --- 10.0.0.1 ping statistics --- 00:26:59.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.190 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:59.190 18:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:00.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.125 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.125 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:00.383 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99750 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99750 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99750 ']' 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.384 18:52:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.384 [2024-07-15 18:52:34.751851] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:27:00.384 [2024-07-15 18:52:34.751936] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.642 [2024-07-15 18:52:34.894863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.642 [2024-07-15 18:52:35.019610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.642 [2024-07-15 18:52:35.019684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.642 [2024-07-15 18:52:35.019699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.642 [2024-07-15 18:52:35.019712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.642 [2024-07-15 18:52:35.019722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.642 [2024-07-15 18:52:35.019940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.642 [2024-07-15 18:52:35.020088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.642 [2024-07-15 18:52:35.020771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.642 [2024-07-15 18:52:35.020774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:27:01.604 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:01.605 18:52:35 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 ************************************ 00:27:01.605 START TEST spdk_target_abort 00:27:01.605 ************************************ 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 spdk_targetn1 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 [2024-07-15 18:52:35.961476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.605 18:52:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 [2024-07-15 18:52:36.001749] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.605 18:52:36 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.605 18:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:04.884 Initializing NVMe Controllers 00:27:04.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:04.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:04.884 Initialization complete. Launching workers. 
00:27:04.884 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13023, failed: 0 00:27:04.884 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1083, failed to submit 11940 00:27:04.884 success 771, unsuccess 312, failed 0 00:27:04.884 18:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.884 18:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.217 Initializing NVMe Controllers 00:27:08.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:08.217 Initialization complete. Launching workers. 00:27:08.217 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5980, failed: 0 00:27:08.217 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1262, failed to submit 4718 00:27:08.217 success 248, unsuccess 1014, failed 0 00:27:08.217 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:08.217 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.624 Initializing NVMe Controllers 00:27:11.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:11.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:11.624 Initialization complete. Launching workers. 
00:27:11.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31249, failed: 0 00:27:11.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2533, failed to submit 28716 00:27:11.624 success 508, unsuccess 2025, failed 0 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.624 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99750 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99750 ']' 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99750 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99750 00:27:12.558 killing process with pid 99750 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99750' 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99750 00:27:12.558 18:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99750 00:27:12.817 ************************************ 00:27:12.817 END TEST spdk_target_abort 00:27:12.817 ************************************ 00:27:12.817 00:27:12.817 real 0m11.202s 00:27:12.817 user 0m44.089s 00:27:12.817 sys 0m2.253s 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.817 18:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:12.817 18:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:12.817 18:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:12.817 18:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.817 18:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:12.817 
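Note: the spdk_target_abort trace above is driven by the rabort helper in target/abort_qd_sizes.sh: it assembles an SPDK transport-ID string from trtype/adrfam/traddr/trsvcid/subnqn and runs build/examples/abort once per queue depth in qds=(4 24 64). The sketch below only reproduces that pattern with the paths and flags visible in the trace; it is not the script's actual function.

# Sketch of the rabort pattern seen above (values and flags copied from the trace).
rabort_sketch() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds=(4 24 64) target="" r qd

    # Build the transport-ID string piece by piece, as the traced loop does.
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"
    done

    # One abort run per queue depth: 50% read/write mix, 4 KiB I/O size.
    for qd in "${qds[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}

# Invocation matching the run above:
# rabort_sketch tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn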
************************************ 00:27:12.817 START TEST kernel_target_abort 00:27:12.817 ************************************ 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:12.817 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:13.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.335 Waiting for block devices as requested 00:27:13.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:13.335 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:13.593 No valid GPT data, bailing 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:13.593 No valid GPT data, bailing 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
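Note: the "No valid GPT data, bailing" lines above are the device-selection step of configure_kernel_target. It walks /sys/block/nvme*, skips zoned namespaces, and treats a namespace with no partition-table signature (empty blkid PTTYPE, after the spdk-gpt.py probe) as free to back the kernel target. A rough sketch of that scan, using only the commands visible in the trace and not the exact helper logic:

# Rough sketch of the backing-device scan seen above.
nvme=""
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # Skip zoned namespaces (queue/zoned reports something other than "none").
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # An empty partition-table type from blkid means nothing lives on the device yet.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null)
    [[ -z $pt ]] && nvme=/dev/$dev
done
echo "selected backing device: $nvme"   # the run above ends up with /dev/nvme1n1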
00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:13.593 18:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:13.593 No valid GPT data, bailing 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:13.593 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:13.594 No valid GPT data, bailing 00:27:13.594 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 --hostid=6595a4fd-62c0-4385-bb15-2b50828eda08 -a 10.0.0.1 -t tcp -s 4420 00:27:13.853 00:27:13.853 Discovery Log Number of Records 2, Generation counter 2 00:27:13.853 =====Discovery Log Entry 0====== 00:27:13.853 trtype: tcp 00:27:13.853 adrfam: ipv4 00:27:13.853 subtype: current discovery subsystem 00:27:13.853 treq: not specified, sq flow control disable supported 00:27:13.853 portid: 1 00:27:13.853 trsvcid: 4420 00:27:13.853 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:13.853 traddr: 10.0.0.1 00:27:13.853 eflags: none 00:27:13.853 sectype: none 00:27:13.853 =====Discovery Log Entry 1====== 00:27:13.853 trtype: tcp 00:27:13.853 adrfam: ipv4 00:27:13.853 subtype: nvme subsystem 00:27:13.853 treq: not specified, sq flow control disable supported 00:27:13.853 portid: 1 00:27:13.853 trsvcid: 4420 00:27:13.853 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:13.853 traddr: 10.0.0.1 00:27:13.853 eflags: none 00:27:13.853 sectype: none 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:13.853 18:52:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:13.853 18:52:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.134 Initializing NVMe Controllers 00:27:17.134 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.134 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:17.134 Initialization complete. Launching workers. 00:27:17.134 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41452, failed: 0 00:27:17.134 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41452, failed to submit 0 00:27:17.134 success 0, unsuccess 41452, failed 0 00:27:17.134 18:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.134 18:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:20.409 Initializing NVMe Controllers 00:27:20.409 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:20.409 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:20.409 Initialization complete. Launching workers. 
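Note: configure_kernel_target drives the in-kernel nvmet target purely through configfs. Xtrace shows the mkdir/echo/ln -s sequence but hides the redirect targets, so the attribute file names below are filled in from the standard nvmet configfs layout and should be read as an illustration, not a verbatim copy of nvmf/common.sh.

# Illustrative configfs sequence for the kernel target set up above.
# Attribute file names are assumptions; xtrace does not show the '>' targets.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

# (the script also writes an "SPDK-<nqn>" identifier into a subsystem attribute; omitted here)
echo 1 > "$subsys/attr_allow_any_host"                   # accept any host NQN
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device picked earlier
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/$nqn"   # expose the subsystem on the port

# Teardown mirrors this in reverse, as the later clean_kernel_target trace shows:
# remove the symlink, rmdir namespace/port/subsystem, then modprobe -r nvmet_tcp nvmet.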
00:27:20.409 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85808, failed: 0 00:27:20.409 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37960, failed to submit 47848 00:27:20.409 success 0, unsuccess 37960, failed 0 00:27:20.409 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:20.409 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.700 Initializing NVMe Controllers 00:27:23.700 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.700 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:23.700 Initialization complete. Launching workers. 00:27:23.700 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97794, failed: 0 00:27:23.700 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24514, failed to submit 73280 00:27:23.700 success 0, unsuccess 24514, failed 0 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:23.700 18:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:24.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:26.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.846 00:27:26.846 real 0m14.033s 00:27:26.846 user 0m6.529s 00:27:26.846 sys 0m5.068s 00:27:26.846 18:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.846 18:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.846 ************************************ 00:27:26.846 END TEST kernel_target_abort 00:27:26.846 ************************************ 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:26.846 
18:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.846 rmmod nvme_tcp 00:27:26.846 rmmod nvme_fabrics 00:27:26.846 rmmod nvme_keyring 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99750 ']' 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99750 00:27:26.846 Process with pid 99750 is not found 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99750 ']' 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99750 00:27:26.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99750) - No such process 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99750 is not found' 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:26.846 18:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:27.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:27.411 Waiting for block devices as requested 00:27:27.411 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:27.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:27.668 ************************************ 00:27:27.668 END TEST nvmf_abort_qd_sizes 00:27:27.668 ************************************ 00:27:27.668 00:27:27.668 real 0m28.914s 00:27:27.668 user 0m51.967s 00:27:27.668 sys 0m8.959s 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.668 18:53:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:27.668 18:53:02 -- common/autotest_common.sh@1142 -- # return 0 00:27:27.668 18:53:02 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:27.668 18:53:02 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:27:27.668 18:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.668 18:53:02 -- common/autotest_common.sh@10 -- # set +x 00:27:27.668 ************************************ 00:27:27.668 START TEST keyring_file 00:27:27.668 ************************************ 00:27:27.668 18:53:02 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:27.924 * Looking for test storage... 00:27:27.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:27.924 18:53:02 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:27.924 18:53:02 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.924 18:53:02 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:27.924 18:53:02 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.924 18:53:02 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.924 18:53:02 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.924 18:53:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.924 18:53:02 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.924 18:53:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.924 18:53:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:27.924 18:53:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GgfXKPjJWg 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GgfXKPjJWg 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GgfXKPjJWg 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GgfXKPjJWg 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eDWAgfVc5O 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.925 18:53:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eDWAgfVc5O 00:27:27.925 18:53:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eDWAgfVc5O 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eDWAgfVc5O 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=100634 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:27.925 18:53:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100634 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100634 ']' 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.925 18:53:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.182 [2024-07-15 18:53:02.483698] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
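Note: prep_key in keyring/common.sh formats a TLS PSK in the NVMe interchange form, writes it to a mktemp file, and chmods it to 0600 (the permission checks later in this test depend on that mode). The exact encoding is done by format_key's python snippet in nvmf/common.sh; the heredoc below is an approximation that assumes the interchange format is base64 over the key bytes plus a little-endian CRC-32, with "00" as the no-hash digest field, and it may differ in detail from the real helper.

# Sketch of the prep_key pattern traced above (encoding details are assumptions).
key=00112233445566778899aabbccddeeff
path=$(mktemp)            # e.g. /tmp/tmp.GgfXKPjJWg in the run above
python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumption: LE CRC-32 appended
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
EOF
chmod 0600 "$path"        # 0600 is what the keyring permission checks expect
echo "$path"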
00:27:28.182 [2024-07-15 18:53:02.484375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100634 ] 00:27:28.182 [2024-07-15 18:53:02.641919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.438 [2024-07-15 18:53:02.760308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.002 18:53:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.002 18:53:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:29.002 18:53:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:29.002 18:53:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.002 18:53:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.002 [2024-07-15 18:53:03.458525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.002 null0 00:27:29.269 [2024-07-15 18:53:03.490473] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:29.269 [2024-07-15 18:53:03.490731] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:29.269 [2024-07-15 18:53:03.498477] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.269 18:53:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.269 18:53:03 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.270 [2024-07-15 18:53:03.510472] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:29.270 2024/07/15 18:53:03 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:29.270 request: 00:27:29.270 { 00:27:29.270 "method": "nvmf_subsystem_add_listener", 00:27:29.270 "params": { 00:27:29.270 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.270 "secure_channel": false, 00:27:29.270 "listen_address": { 00:27:29.270 "trtype": "tcp", 00:27:29.270 "traddr": "127.0.0.1", 00:27:29.270 "trsvcid": "4420" 00:27:29.270 } 00:27:29.270 } 00:27:29.270 } 00:27:29.270 Got JSON-RPC error 
response 00:27:29.270 GoRPCClient: error on JSON-RPC call 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.270 18:53:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=100668 00:27:29.270 18:53:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100668 /var/tmp/bperf.sock 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100668 ']' 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.270 18:53:03 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.270 18:53:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.270 [2024-07-15 18:53:03.573709] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:27:29.270 [2024-07-15 18:53:03.573821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100668 ] 00:27:29.270 [2024-07-15 18:53:03.716322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.549 [2024-07-15 18:53:03.820412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.115 18:53:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.115 18:53:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:30.115 18:53:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:30.115 18:53:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:30.373 18:53:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eDWAgfVc5O 00:27:30.373 18:53:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eDWAgfVc5O 00:27:30.632 18:53:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:30.632 18:53:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:30.632 18:53:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.632 18:53:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.632 18:53:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.891 18:53:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.GgfXKPjJWg == 
\/\t\m\p\/\t\m\p\.\G\g\f\X\K\P\j\J\W\g ]] 00:27:30.891 18:53:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:30.891 18:53:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:30.891 18:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.891 18:53:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.891 18:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.151 18:53:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eDWAgfVc5O == \/\t\m\p\/\t\m\p\.\e\D\W\A\g\f\V\c\5\O ]] 00:27:31.151 18:53:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:31.151 18:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.151 18:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.151 18:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:31.151 18:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.151 18:53:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.438 18:53:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:31.438 18:53:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.438 18:53:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:31.438 18:53:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.438 18:53:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.700 [2024-07-15 18:53:06.059922] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:31.700 nvme0n1 00:27:31.700 18:53:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:31.700 18:53:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.700 18:53:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.700 18:53:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:31.700 18:53:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.700 18:53:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.958 18:53:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:31.958 18:53:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:31.958 18:53:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:31.958 18:53:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.958 18:53:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:27:31.958 18:53:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.958 18:53:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.216 18:53:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:32.216 18:53:06 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.474 Running I/O for 1 seconds... 00:27:33.408 00:27:33.408 Latency(us) 00:27:33.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.408 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:33.408 nvme0n1 : 1.00 14099.83 55.08 0.00 0.00 9054.70 3495.25 13107.20 00:27:33.408 =================================================================================================================== 00:27:33.408 Total : 14099.83 55.08 0.00 0.00 9054.70 3495.25 13107.20 00:27:33.408 0 00:27:33.408 18:53:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:33.408 18:53:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:33.666 18:53:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:33.666 18:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.666 18:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.666 18:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.666 18:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.666 18:53:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.924 18:53:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:33.924 18:53:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:33.924 18:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.924 18:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.924 18:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.924 18:53:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.924 18:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.182 18:53:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:34.182 18:53:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
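Note: the refcnt checks above reduce to a small RPC workflow against the bdevperf app's socket. Condensed from the commands visible in the trace (socket path, NQNs and key names as used in this run): register a file-based key, attach a controller that references it, and watch the key's refcnt go from 1 (registered) to 2 (in use), then back down after detach.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 1

"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2

"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
"$rpc" -s "$sock" keyring_file_remove_key key0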
00:27:34.182 18:53:08 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.182 18:53:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.441 [2024-07-15 18:53:08.774672] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:34.441 [2024-07-15 18:53:08.774880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6f30 (107): Transport endpoint is not connected 00:27:34.441 [2024-07-15 18:53:08.775867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6f30 (9): Bad file descriptor 00:27:34.441 [2024-07-15 18:53:08.776864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.441 [2024-07-15 18:53:08.776883] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:34.441 [2024-07-15 18:53:08.776893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.441 2024/07/15 18:53:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:34.441 request: 00:27:34.441 { 00:27:34.441 "method": "bdev_nvme_attach_controller", 00:27:34.441 "params": { 00:27:34.441 "name": "nvme0", 00:27:34.441 "trtype": "tcp", 00:27:34.441 "traddr": "127.0.0.1", 00:27:34.441 "adrfam": "ipv4", 00:27:34.441 "trsvcid": "4420", 00:27:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.441 "prchk_reftag": false, 00:27:34.441 "prchk_guard": false, 00:27:34.441 "hdgst": false, 00:27:34.441 "ddgst": false, 00:27:34.441 "psk": "key1" 00:27:34.441 } 00:27:34.441 } 00:27:34.441 Got JSON-RPC error response 00:27:34.441 GoRPCClient: error on JSON-RPC call 00:27:34.441 18:53:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:34.441 18:53:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.441 18:53:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.441 18:53:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.441 18:53:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:34.441 18:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.441 18:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.441 18:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.441 18:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.441 18:53:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.700 18:53:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:34.700 
18:53:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:34.700 18:53:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:34.700 18:53:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.700 18:53:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.700 18:53:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.700 18:53:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.985 18:53:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:34.985 18:53:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.985 18:53:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:35.243 18:53:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:35.243 18:53:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:35.502 18:53:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:35.502 18:53:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:35.502 18:53:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.760 18:53:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:35.760 18:53:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.GgfXKPjJWg 00:27:35.760 18:53:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.760 18:53:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:35.760 18:53:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:35.760 [2024-07-15 18:53:10.226310] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GgfXKPjJWg': 0100660 00:27:35.760 [2024-07-15 18:53:10.226358] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:35.760 2024/07/15 18:53:10 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.GgfXKPjJWg], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:27:35.760 request: 00:27:35.760 { 00:27:35.760 "method": "keyring_file_add_key", 00:27:35.760 "params": { 00:27:35.760 "name": "key0", 00:27:35.760 "path": "/tmp/tmp.GgfXKPjJWg" 00:27:35.760 } 00:27:35.760 } 00:27:35.760 Got JSON-RPC error response 00:27:35.760 GoRPCClient: error on JSON-RPC call 00:27:36.017 18:53:10 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:27:36.017 18:53:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.017 18:53:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.017 18:53:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.017 18:53:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.GgfXKPjJWg 00:27:36.017 18:53:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:36.018 18:53:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgfXKPjJWg 00:27:36.275 18:53:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.GgfXKPjJWg 00:27:36.275 18:53:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:36.275 18:53:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.275 18:53:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.275 18:53:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.275 18:53:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.275 18:53:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.534 18:53:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:36.534 18:53:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.534 18:53:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.534 18:53:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.793 [2024-07-15 18:53:11.058478] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GgfXKPjJWg': No such file or directory 00:27:36.793 [2024-07-15 18:53:11.058523] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:36.793 [2024-07-15 18:53:11.058550] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:36.793 [2024-07-15 18:53:11.058559] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:36.793 [2024-07-15 18:53:11.058568] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:36.793 2024/07/15 
18:53:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:27:36.793 request: 00:27:36.793 { 00:27:36.793 "method": "bdev_nvme_attach_controller", 00:27:36.793 "params": { 00:27:36.793 "name": "nvme0", 00:27:36.793 "trtype": "tcp", 00:27:36.793 "traddr": "127.0.0.1", 00:27:36.793 "adrfam": "ipv4", 00:27:36.793 "trsvcid": "4420", 00:27:36.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.793 "prchk_reftag": false, 00:27:36.793 "prchk_guard": false, 00:27:36.793 "hdgst": false, 00:27:36.793 "ddgst": false, 00:27:36.793 "psk": "key0" 00:27:36.793 } 00:27:36.793 } 00:27:36.793 Got JSON-RPC error response 00:27:36.793 GoRPCClient: error on JSON-RPC call 00:27:36.793 18:53:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:36.793 18:53:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.793 18:53:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.793 18:53:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.793 18:53:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:36.793 18:53:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:36.793 18:53:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:37.052 18:53:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:37.052 18:53:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AYE9LhFEOw 00:27:37.052 18:53:11 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.052 18:53:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.619 nvme0n1 00:27:37.619 18:53:11 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:37.619 18:53:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.619 18:53:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.619 18:53:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.619 18:53:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.619 18:53:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.619 18:53:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:37.619 18:53:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:37.619 18:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:38.186 18:53:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:38.186 18:53:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.186 18:53:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:38.186 18:53:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.186 18:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.444 18:53:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:38.444 18:53:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:38.444 18:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.702 18:53:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:38.702 18:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.702 18:53:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:39.269 18:53:13 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:39.269 18:53:13 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYE9LhFEOw 00:27:39.269 18:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AYE9LhFEOw 
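The records above walk through the file-based keyring flow this suite checks: keyring_file_add_key rejects a key file left at mode 0660 ("Invalid permissions for key file"), succeeds once the file is tightened to 0600, and the registered key can then back a TLS attach via --psk. A minimal sketch of the same sequence against the bperf RPC socket follows; it assumes the SPDK checkout at /home/vagrant/spdk_repo/spdk as in this log, reuses the interchange-format PSK string shown later in the log for the same 00112233445566778899aabbccddeeff secret, and the shell variables are only for the sketch.

  # Key files must be mode 0600; anything wider is refused by keyring_file_check_path.
  KEYFILE=$(mktemp)
  echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEYFILE"
  chmod 0600 "$KEYFILE"

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # Register the file-backed key, then use it as the PSK for the TLS attach.
  "$RPC" -s "$SOCK" keyring_file_add_key key0 "$KEYFILE"
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  "$RPC" -s "$SOCK" keyring_get_keys    # the refcnt reported for key0 should now be non-zero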
00:27:39.269 18:53:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eDWAgfVc5O 00:27:39.269 18:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eDWAgfVc5O 00:27:39.528 18:53:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.528 18:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.786 nvme0n1 00:27:39.786 18:53:14 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:39.786 18:53:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:40.353 18:53:14 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:40.353 "subsystems": [ 00:27:40.353 { 00:27:40.353 "subsystem": "keyring", 00:27:40.353 "config": [ 00:27:40.353 { 00:27:40.353 "method": "keyring_file_add_key", 00:27:40.353 "params": { 00:27:40.353 "name": "key0", 00:27:40.353 "path": "/tmp/tmp.AYE9LhFEOw" 00:27:40.353 } 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "method": "keyring_file_add_key", 00:27:40.353 "params": { 00:27:40.353 "name": "key1", 00:27:40.353 "path": "/tmp/tmp.eDWAgfVc5O" 00:27:40.353 } 00:27:40.353 } 00:27:40.353 ] 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "subsystem": "iobuf", 00:27:40.353 "config": [ 00:27:40.353 { 00:27:40.353 "method": "iobuf_set_options", 00:27:40.353 "params": { 00:27:40.353 "large_bufsize": 135168, 00:27:40.353 "large_pool_count": 1024, 00:27:40.353 "small_bufsize": 8192, 00:27:40.353 "small_pool_count": 8192 00:27:40.353 } 00:27:40.353 } 00:27:40.353 ] 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "subsystem": "sock", 00:27:40.353 "config": [ 00:27:40.353 { 00:27:40.353 "method": "sock_set_default_impl", 00:27:40.353 "params": { 00:27:40.353 "impl_name": "posix" 00:27:40.353 } 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "method": "sock_impl_set_options", 00:27:40.353 "params": { 00:27:40.353 "enable_ktls": false, 00:27:40.353 "enable_placement_id": 0, 00:27:40.353 "enable_quickack": false, 00:27:40.353 "enable_recv_pipe": true, 00:27:40.353 "enable_zerocopy_send_client": false, 00:27:40.353 "enable_zerocopy_send_server": true, 00:27:40.353 "impl_name": "ssl", 00:27:40.353 "recv_buf_size": 4096, 00:27:40.353 "send_buf_size": 4096, 00:27:40.353 "tls_version": 0, 00:27:40.353 "zerocopy_threshold": 0 00:27:40.353 } 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "method": "sock_impl_set_options", 00:27:40.353 "params": { 00:27:40.353 "enable_ktls": false, 00:27:40.353 "enable_placement_id": 0, 00:27:40.353 "enable_quickack": false, 00:27:40.353 "enable_recv_pipe": true, 00:27:40.353 "enable_zerocopy_send_client": false, 00:27:40.353 "enable_zerocopy_send_server": true, 00:27:40.353 "impl_name": "posix", 00:27:40.353 "recv_buf_size": 2097152, 00:27:40.353 "send_buf_size": 2097152, 00:27:40.353 "tls_version": 0, 00:27:40.353 "zerocopy_threshold": 0 00:27:40.353 } 00:27:40.353 } 00:27:40.353 ] 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "subsystem": "vmd", 00:27:40.353 "config": [] 00:27:40.353 }, 00:27:40.353 { 00:27:40.353 "subsystem": "accel", 00:27:40.353 "config": [ 00:27:40.353 { 00:27:40.353 "method": 
"accel_set_options", 00:27:40.353 "params": { 00:27:40.353 "buf_count": 2048, 00:27:40.353 "large_cache_size": 16, 00:27:40.353 "sequence_count": 2048, 00:27:40.353 "small_cache_size": 128, 00:27:40.353 "task_count": 2048 00:27:40.353 } 00:27:40.353 } 00:27:40.354 ] 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "subsystem": "bdev", 00:27:40.354 "config": [ 00:27:40.354 { 00:27:40.354 "method": "bdev_set_options", 00:27:40.354 "params": { 00:27:40.354 "bdev_auto_examine": true, 00:27:40.354 "bdev_io_cache_size": 256, 00:27:40.354 "bdev_io_pool_size": 65535, 00:27:40.354 "iobuf_large_cache_size": 16, 00:27:40.354 "iobuf_small_cache_size": 128 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_raid_set_options", 00:27:40.354 "params": { 00:27:40.354 "process_window_size_kb": 1024 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_iscsi_set_options", 00:27:40.354 "params": { 00:27:40.354 "timeout_sec": 30 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_nvme_set_options", 00:27:40.354 "params": { 00:27:40.354 "action_on_timeout": "none", 00:27:40.354 "allow_accel_sequence": false, 00:27:40.354 "arbitration_burst": 0, 00:27:40.354 "bdev_retry_count": 3, 00:27:40.354 "ctrlr_loss_timeout_sec": 0, 00:27:40.354 "delay_cmd_submit": true, 00:27:40.354 "dhchap_dhgroups": [ 00:27:40.354 "null", 00:27:40.354 "ffdhe2048", 00:27:40.354 "ffdhe3072", 00:27:40.354 "ffdhe4096", 00:27:40.354 "ffdhe6144", 00:27:40.354 "ffdhe8192" 00:27:40.354 ], 00:27:40.354 "dhchap_digests": [ 00:27:40.354 "sha256", 00:27:40.354 "sha384", 00:27:40.354 "sha512" 00:27:40.354 ], 00:27:40.354 "disable_auto_failback": false, 00:27:40.354 "fast_io_fail_timeout_sec": 0, 00:27:40.354 "generate_uuids": false, 00:27:40.354 "high_priority_weight": 0, 00:27:40.354 "io_path_stat": false, 00:27:40.354 "io_queue_requests": 512, 00:27:40.354 "keep_alive_timeout_ms": 10000, 00:27:40.354 "low_priority_weight": 0, 00:27:40.354 "medium_priority_weight": 0, 00:27:40.354 "nvme_adminq_poll_period_us": 10000, 00:27:40.354 "nvme_error_stat": false, 00:27:40.354 "nvme_ioq_poll_period_us": 0, 00:27:40.354 "rdma_cm_event_timeout_ms": 0, 00:27:40.354 "rdma_max_cq_size": 0, 00:27:40.354 "rdma_srq_size": 0, 00:27:40.354 "reconnect_delay_sec": 0, 00:27:40.354 "timeout_admin_us": 0, 00:27:40.354 "timeout_us": 0, 00:27:40.354 "transport_ack_timeout": 0, 00:27:40.354 "transport_retry_count": 4, 00:27:40.354 "transport_tos": 0 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_nvme_attach_controller", 00:27:40.354 "params": { 00:27:40.354 "adrfam": "IPv4", 00:27:40.354 "ctrlr_loss_timeout_sec": 0, 00:27:40.354 "ddgst": false, 00:27:40.354 "fast_io_fail_timeout_sec": 0, 00:27:40.354 "hdgst": false, 00:27:40.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.354 "name": "nvme0", 00:27:40.354 "prchk_guard": false, 00:27:40.354 "prchk_reftag": false, 00:27:40.354 "psk": "key0", 00:27:40.354 "reconnect_delay_sec": 0, 00:27:40.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.354 "traddr": "127.0.0.1", 00:27:40.354 "trsvcid": "4420", 00:27:40.354 "trtype": "TCP" 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_nvme_set_hotplug", 00:27:40.354 "params": { 00:27:40.354 "enable": false, 00:27:40.354 "period_us": 100000 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "bdev_wait_for_examine" 00:27:40.354 } 00:27:40.354 ] 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "subsystem": "nbd", 00:27:40.354 "config": [] 00:27:40.354 } 
00:27:40.354 ] 00:27:40.354 }' 00:27:40.354 18:53:14 keyring_file -- keyring/file.sh@114 -- # killprocess 100668 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100668 ']' 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100668 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100668 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:40.354 killing process with pid 100668 00:27:40.354 Received shutdown signal, test time was about 1.000000 seconds 00:27:40.354 00:27:40.354 Latency(us) 00:27:40.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.354 =================================================================================================================== 00:27:40.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100668' 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@967 -- # kill 100668 00:27:40.354 18:53:14 keyring_file -- common/autotest_common.sh@972 -- # wait 100668 00:27:40.354 18:53:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=101135 00:27:40.354 18:53:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 101135 /var/tmp/bperf.sock 00:27:40.354 18:53:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:40.354 "subsystems": [ 00:27:40.354 { 00:27:40.354 "subsystem": "keyring", 00:27:40.354 "config": [ 00:27:40.354 { 00:27:40.354 "method": "keyring_file_add_key", 00:27:40.354 "params": { 00:27:40.354 "name": "key0", 00:27:40.354 "path": "/tmp/tmp.AYE9LhFEOw" 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "keyring_file_add_key", 00:27:40.354 "params": { 00:27:40.354 "name": "key1", 00:27:40.354 "path": "/tmp/tmp.eDWAgfVc5O" 00:27:40.354 } 00:27:40.354 } 00:27:40.354 ] 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "subsystem": "iobuf", 00:27:40.354 "config": [ 00:27:40.354 { 00:27:40.354 "method": "iobuf_set_options", 00:27:40.354 "params": { 00:27:40.354 "large_bufsize": 135168, 00:27:40.354 "large_pool_count": 1024, 00:27:40.354 "small_bufsize": 8192, 00:27:40.354 "small_pool_count": 8192 00:27:40.354 } 00:27:40.354 } 00:27:40.354 ] 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "subsystem": "sock", 00:27:40.354 "config": [ 00:27:40.354 { 00:27:40.354 "method": "sock_set_default_impl", 00:27:40.354 "params": { 00:27:40.354 "impl_name": "posix" 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "sock_impl_set_options", 00:27:40.354 "params": { 00:27:40.354 "enable_ktls": false, 00:27:40.354 "enable_placement_id": 0, 00:27:40.354 "enable_quickack": false, 00:27:40.354 "enable_recv_pipe": true, 00:27:40.354 "enable_zerocopy_send_client": false, 00:27:40.354 "enable_zerocopy_send_server": true, 00:27:40.354 "impl_name": "ssl", 00:27:40.354 "recv_buf_size": 4096, 00:27:40.354 "send_buf_size": 4096, 00:27:40.354 "tls_version": 0, 00:27:40.354 "zerocopy_threshold": 0 00:27:40.354 } 00:27:40.354 }, 00:27:40.354 { 00:27:40.354 "method": "sock_impl_set_options", 00:27:40.354 "params": { 00:27:40.354 "enable_ktls": false, 00:27:40.354 "enable_placement_id": 0, 
00:27:40.354 "enable_quickack": false, 00:27:40.354 "enable_recv_pipe": true, 00:27:40.354 "enable_zerocopy_send_client": false, 00:27:40.354 "enable_zerocopy_send_server": true, 00:27:40.355 "impl_name": "posix", 00:27:40.355 "recv_buf_size": 2097152, 00:27:40.355 "send_buf_size": 2097152, 00:27:40.355 "tls_version": 0, 00:27:40.355 "zerocopy_threshold": 0 00:27:40.355 } 00:27:40.355 } 00:27:40.355 ] 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "subsystem": "vmd", 00:27:40.355 "config": [] 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "subsystem": "accel", 00:27:40.355 "config": [ 00:27:40.355 { 00:27:40.355 "method": "accel_set_options", 00:27:40.355 "params": { 00:27:40.355 "buf_count": 2048, 00:27:40.355 "large_cache_size": 16, 00:27:40.355 "sequence_count": 2048, 00:27:40.355 "small_cache_size": 128, 00:27:40.355 "task_count": 2048 00:27:40.355 } 00:27:40.355 } 00:27:40.355 ] 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "subsystem": "bdev", 00:27:40.355 "config": [ 00:27:40.355 { 00:27:40.355 "method": "bdev_set_options", 00:27:40.355 "params": { 00:27:40.355 "bdev_auto_examine": true, 00:27:40.355 "bdev_io_cache_size": 256, 00:27:40.355 "bdev_io_pool_size": 65535, 00:27:40.355 "iobuf_large_cache_size": 16, 00:27:40.355 "iobuf_small_cache_size": 128 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_raid_set_options", 00:27:40.355 "params": { 00:27:40.355 "process_window_size_kb": 1024 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_iscsi_set_options", 00:27:40.355 "params": { 00:27:40.355 "timeout_sec": 30 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_nvme_set_options", 00:27:40.355 "params": { 00:27:40.355 "action_on_timeout": "none", 00:27:40.355 "allow_accel_sequence": false, 00:27:40.355 "arbitration_burst": 0, 00:27:40.355 "bdev_retry_count": 3, 00:27:40.355 "ctrlr_loss_timeout_sec": 0, 00:27:40.355 "delay_cmd_submit": true, 00:27:40.355 "dhchap_dhgroups": [ 00:27:40.355 "null", 00:27:40.355 "ffdhe2048", 00:27:40.355 "ffdhe3072", 00:27:40.355 "ffdhe4096", 00:27:40.355 "ffdhe6144", 00:27:40.355 "ffdhe8192" 00:27:40.355 ], 00:27:40.355 "dhchap_digests": [ 00:27:40.355 "sha256", 00:27:40.355 "sha384", 00:27:40.355 "sha512" 00:27:40.355 ], 00:27:40.355 "disable_auto_failback": false, 00:27:40.355 "fast_io_fail_timeout_sec": 0, 00:27:40.355 "generate_uuids": false, 00:27:40.355 "high_priority_weight": 0, 00:27:40.355 "io_path_stat": false, 00:27:40.355 "io_queue_requests": 512, 00:27:40.355 "keep_alive_timeout_ms": 10000, 00:27:40.355 "low_priority_weight": 0, 00:27:40.355 "medium_priority_weight": 0, 00:27:40.355 "nvme_adminq_poll_period_us": 10000, 00:27:40.355 "nvme_error_stat": false, 00:27:40.355 "nvme_ioq_poll_period_us": 0, 00:27:40.355 "rdma_cm_event_timeout_ms": 0, 00:27:40.355 "rdma_max_cq_size": 0, 00:27:40.355 "rdma_srq_size": 0, 00:27:40.355 "reconnect_delay_sec": 0, 00:27:40.355 "timeout_admin_us": 0, 00:27:40.355 "timeout_us": 0, 00:27:40.355 "transport_ack_timeout": 0, 00:27:40.355 "transport_retry_count": 4, 00:27:40.355 "transport_tos": 0 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_nvme_attach_controller", 00:27:40.355 "params": { 00:27:40.355 "adrfam": "IPv4", 00:27:40.355 "ctrlr_loss_timeout_sec": 0, 00:27:40.355 "ddgst": false, 00:27:40.355 "fast_io_fail_timeout_sec": 0, 00:27:40.355 "hdgst": false, 00:27:40.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.355 "name": "nvme0", 00:27:40.355 "prchk_guard": false, 00:27:40.355 "prchk_reftag": false, 
00:27:40.355 "psk": "key0", 00:27:40.355 "reconnect_delay_sec": 0, 00:27:40.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.355 "traddr": "127.0.0.1", 00:27:40.355 "trsvcid": "4420", 00:27:40.355 "trtype": "TCP" 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_nvme_set_hotplug", 00:27:40.355 "params": { 00:27:40.355 "enable": false, 00:27:40.355 "period_us": 100000 00:27:40.355 } 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "method": "bdev_wait_for_examine" 00:27:40.355 } 00:27:40.355 ] 00:27:40.355 }, 00:27:40.355 { 00:27:40.355 "subsystem": "nbd", 00:27:40.355 "config": [] 00:27:40.355 } 00:27:40.355 ] 00:27:40.355 }' 00:27:40.355 18:53:14 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 101135 ']' 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.355 18:53:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:40.355 [2024-07-15 18:53:14.821090] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:27:40.355 [2024-07-15 18:53:14.821200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101135 ] 00:27:40.614 [2024-07-15 18:53:14.958244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.614 [2024-07-15 18:53:15.055205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.872 [2024-07-15 18:53:15.219281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.459 18:53:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:41.459 18:53:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:41.459 18:53:15 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:41.459 18:53:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.459 18:53:15 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:41.755 18:53:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:41.755 18:53:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:41.755 18:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.755 18:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.755 18:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.755 18:53:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.755 18:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.755 18:53:16 keyring_file -- keyring/file.sh@121 -- 
# (( 2 == 2 )) 00:27:41.755 18:53:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:41.755 18:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:41.755 18:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.755 18:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.755 18:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.755 18:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:42.015 18:53:16 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:42.274 18:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.AYE9LhFEOw /tmp/tmp.eDWAgfVc5O 00:27:42.274 18:53:16 keyring_file -- keyring/file.sh@20 -- # killprocess 101135 00:27:42.274 18:53:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 101135 ']' 00:27:42.274 18:53:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 101135 00:27:42.274 18:53:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:42.274 18:53:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.274 18:53:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101135 00:27:42.533 killing process with pid 101135 00:27:42.533 Received shutdown signal, test time was about 1.000000 seconds 00:27:42.533 00:27:42.533 Latency(us) 00:27:42.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.533 =================================================================================================================== 00:27:42.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101135' 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@967 -- # kill 101135 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@972 -- # wait 101135 00:27:42.533 18:53:16 keyring_file -- keyring/file.sh@21 -- # killprocess 100634 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100634 ']' 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100634 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100634 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.533 killing process with pid 100634 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.533 18:53:16 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100634' 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@967 -- # kill 100634 00:27:42.533 [2024-07-15 18:53:16.987726] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:42.533 18:53:16 keyring_file -- common/autotest_common.sh@972 -- # wait 100634 00:27:43.101 00:27:43.101 real 0m15.180s 00:27:43.101 user 0m36.825s 00:27:43.101 sys 0m3.683s 00:27:43.101 18:53:17 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.101 18:53:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:43.101 ************************************ 00:27:43.101 END TEST keyring_file 00:27:43.101 ************************************ 00:27:43.101 18:53:17 -- common/autotest_common.sh@1142 -- # return 0 00:27:43.101 18:53:17 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:43.101 18:53:17 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:27:43.101 18:53:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:43.101 18:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.101 18:53:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.101 ************************************ 00:27:43.101 START TEST keyring_linux 00:27:43.101 ************************************ 00:27:43.101 18:53:17 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:27:43.101 * Looking for test storage... 00:27:43.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:43.101 18:53:17 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:43.101 18:53:17 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:43.101 18:53:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:43.101 18:53:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6595a4fd-62c0-4385-bb15-2b50828eda08 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=6595a4fd-62c0-4385-bb15-2b50828eda08 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:43.102 18:53:17 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.102 18:53:17 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.102 18:53:17 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.102 18:53:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.102 18:53:17 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.102 18:53:17 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.102 18:53:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:43.102 18:53:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:43.102 18:53:17 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:43.102 /tmp/:spdk-test:key0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:43.102 18:53:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:43.102 /tmp/:spdk-test:key1 00:27:43.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
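This is the setup half of the keyring_linux suite: prep_key writes each test secret as an NVMe TLS interchange PSK into /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 and locks both files down to 0600 before the target starts. A rough equivalent of that prep step, assuming the two formatted strings below (copied from the keyctl records further down in this log) are what format_interchange_psk produces for these secrets:

  # In linux.sh the strings come from the format_interchange_psk helper in nvmf/common.sh;
  # they are written out by hand here only for illustration.
  echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/:spdk-test:key0
  echo 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' > /tmp/:spdk-test:key1
  chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1
  ls -l /tmp/:spdk-test:key0 /tmp/:spdk-test:key1    # both should show -rw-------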
00:27:43.102 18:53:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101284 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:43.102 18:53:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101284 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101284 ']' 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.102 18:53:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.360 [2024-07-15 18:53:17.612421] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:27:43.361 [2024-07-15 18:53:17.612733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101284 ] 00:27:43.361 [2024-07-15 18:53:17.747031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.619 [2024-07-15 18:53:17.851328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:44.188 [2024-07-15 18:53:18.504132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.188 null0 00:27:44.188 [2024-07-15 18:53:18.536104] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:44.188 [2024-07-15 18:53:18.536345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:44.188 28583550 00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:44.188 132014893 00:27:44.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
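With the target listening on 127.0.0.1:4420, the test loads both PSKs into the kernel session keyring; keyctl returns the serial numbers (28583550 and 132014893 above) that the later checks match against keyctl search and keyctl print. A small sketch of that round trip, assuming the key files written earlier are still in place; the SN variable is only for the sketch.

  # Add a user-type key to the session keyring (@s); keyctl prints the new serial.
  SN=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)

  # Resolving the name should yield the same serial, and the payload should be the
  # interchange-format PSK that was loaded.
  keyctl search @s user :spdk-test:key0    # expected to print $SN
  keyctl print "$SN"                       # expected to start with NVMeTLSkey-1:00: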
00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101320 00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:44.188 18:53:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101320 /var/tmp/bperf.sock 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101320 ']' 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.188 18:53:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:44.188 [2024-07-15 18:53:18.620117] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:27:44.188 [2024-07-15 18:53:18.620520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101320 ] 00:27:44.447 [2024-07-15 18:53:18.765773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.447 [2024-07-15 18:53:18.883163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.379 18:53:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.379 18:53:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:45.379 18:53:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:45.379 18:53:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:45.379 18:53:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:45.379 18:53:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:45.637 18:53:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:45.637 18:53:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:45.896 [2024-07-15 18:53:20.303802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:46.159 nvme0n1 00:27:46.159 18:53:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:46.159 18:53:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:46.159 18:53:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:46.159 18:53:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:46.159 18:53:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.159 18:53:20 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:46.464 18:53:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.464 18:53:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.464 18:53:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@25 -- # sn=28583550 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 28583550 == \2\8\5\8\3\5\5\0 ]] 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 28583550 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:46.464 18:53:20 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.721 Running I/O for 1 seconds... 00:27:47.661 00:27:47.661 Latency(us) 00:27:47.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.661 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:47.661 nvme0n1 : 1.01 14211.69 55.51 0.00 0.00 8959.86 7365.00 15666.22 00:27:47.661 =================================================================================================================== 00:27:47.661 Total : 14211.69 55.51 0.00 0.00 8959.86 7365.00 15666.22 00:27:47.661 0 00:27:47.661 18:53:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:47.661 18:53:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:47.919 18:53:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:47.919 18:53:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:47.919 18:53:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:47.919 18:53:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:47.919 18:53:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.919 18:53:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:48.177 18:53:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:48.177 18:53:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:48.177 18:53:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:48.177 18:53:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:48.177 18:53:22 
keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.177 18:53:22 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:48.177 18:53:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:48.435 [2024-07-15 18:53:22.859077] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:48.435 [2024-07-15 18:53:22.859709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130eea0 (107): Transport endpoint is not connected 00:27:48.435 [2024-07-15 18:53:22.860695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130eea0 (9): Bad file descriptor 00:27:48.435 [2024-07-15 18:53:22.861693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:48.435 [2024-07-15 18:53:22.861723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:48.435 [2024-07-15 18:53:22.861735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:48.435 2024/07/15 18:53:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:48.435 request: 00:27:48.435 { 00:27:48.435 "method": "bdev_nvme_attach_controller", 00:27:48.435 "params": { 00:27:48.435 "name": "nvme0", 00:27:48.435 "trtype": "tcp", 00:27:48.435 "traddr": "127.0.0.1", 00:27:48.435 "adrfam": "ipv4", 00:27:48.435 "trsvcid": "4420", 00:27:48.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:48.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:48.435 "prchk_reftag": false, 00:27:48.435 "prchk_guard": false, 00:27:48.435 "hdgst": false, 00:27:48.435 "ddgst": false, 00:27:48.435 "psk": ":spdk-test:key1" 00:27:48.435 } 00:27:48.435 } 00:27:48.435 Got JSON-RPC error response 00:27:48.435 GoRPCClient: error on JSON-RPC call 00:27:48.435 18:53:22 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:48.435 18:53:22 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.435 18:53:22 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.435 18:53:22 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:48.435 18:53:22 keyring_linux -- keyring/linux.sh@33 -- # sn=28583550 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 28583550 00:27:48.436 1 links removed 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@33 -- # sn=132014893 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 132014893 00:27:48.436 1 links removed 00:27:48.436 18:53:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101320 00:27:48.436 18:53:22 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101320 ']' 00:27:48.436 18:53:22 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101320 00:27:48.436 18:53:22 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:48.436 18:53:22 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.436 18:53:22 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101320 00:27:48.694 18:53:22 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:48.694 18:53:22 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:27:48.694 killing process with pid 101320 00:27:48.694 18:53:22 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101320' 00:27:48.694 Received shutdown signal, test time was about 1.000000 seconds 00:27:48.694 00:27:48.694 Latency(us) 00:27:48.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.694 =================================================================================================================== 00:27:48.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.694 18:53:22 keyring_linux -- common/autotest_common.sh@967 -- # kill 101320 00:27:48.694 18:53:22 keyring_linux -- common/autotest_common.sh@972 -- # wait 101320 00:27:48.694 18:53:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101284 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101284 ']' 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101284 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101284 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.694 killing process with pid 101284 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101284' 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@967 -- # kill 101284 00:27:48.694 18:53:23 keyring_linux -- common/autotest_common.sh@972 -- # wait 101284 00:27:49.266 ************************************ 00:27:49.266 END TEST keyring_linux 00:27:49.266 ************************************ 00:27:49.266 00:27:49.266 real 0m6.126s 00:27:49.266 user 0m11.653s 00:27:49.266 sys 0m1.784s 00:27:49.266 18:53:23 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:49.266 18:53:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:49.266 18:53:23 -- common/autotest_common.sh@1142 -- # return 0 00:27:49.266 18:53:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:49.266 18:53:23 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:49.266 18:53:23 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:49.266 18:53:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:49.266 18:53:23 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:49.266 18:53:23 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:49.266 18:53:23 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:49.266 18:53:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:49.266 18:53:23 -- common/autotest_common.sh@10 -- # set +x 
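Both helper processes are then shut down through the harness's killprocess path visible above: check that the pid is set and alive, confirm via ps that the process (reactor_1 for pid 101320, reactor_0 for pid 101284) is not a sudo wrapper, log the kill, then signal and wait. A simplified stand-alone sketch of that sequence; the function name and option handling are illustrative, not the real autotest_common.sh implementation:

    kill_and_wait() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
        [[ "$name" != sudo ]] || return 1                 # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap it if it is our child
    }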
00:27:49.266 18:53:23 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:49.266 18:53:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:49.266 18:53:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:49.266 18:53:23 -- common/autotest_common.sh@10 -- # set +x 00:27:51.168 INFO: APP EXITING 00:27:51.168 INFO: killing all VMs 00:27:51.168 INFO: killing vhost app 00:27:51.168 INFO: EXIT DONE 00:27:51.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:51.735 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:51.735 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:52.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:52.667 Cleaning 00:27:52.667 Removing: /var/run/dpdk/spdk0/config 00:27:52.667 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:52.667 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:52.667 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:52.667 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:52.667 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:52.667 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:52.667 Removing: /var/run/dpdk/spdk1/config 00:27:52.667 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:52.667 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:52.667 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:52.667 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:52.667 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:52.667 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:52.667 Removing: /var/run/dpdk/spdk2/config 00:27:52.667 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:52.667 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:52.667 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:52.667 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:52.667 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:52.667 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:52.667 Removing: /var/run/dpdk/spdk3/config 00:27:52.667 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:52.667 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:52.667 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:52.667 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:52.667 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:52.667 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:52.667 Removing: /var/run/dpdk/spdk4/config 00:27:52.667 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:52.668 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:52.668 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:52.668 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:52.668 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:52.668 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:52.668 Removing: /dev/shm/nvmf_trace.0 00:27:52.668 Removing: /dev/shm/spdk_tgt_trace.pid60655 00:27:52.668 Removing: /var/run/dpdk/spdk0 00:27:52.668 Removing: /var/run/dpdk/spdk1 00:27:52.668 Removing: /var/run/dpdk/spdk2 00:27:52.668 Removing: /var/run/dpdk/spdk3 00:27:52.668 Removing: /var/run/dpdk/spdk4 00:27:52.668 Removing: /var/run/dpdk/spdk_pid100145 00:27:52.668 Removing: /var/run/dpdk/spdk_pid100176 00:27:52.668 Removing: /var/run/dpdk/spdk_pid100207 00:27:52.668 Removing: /var/run/dpdk/spdk_pid100634 
00:27:52.668 Removing: /var/run/dpdk/spdk_pid100668 00:27:52.668 Removing: /var/run/dpdk/spdk_pid101135 00:27:52.668 Removing: /var/run/dpdk/spdk_pid101284 00:27:52.668 Removing: /var/run/dpdk/spdk_pid101320 00:27:52.668 Removing: /var/run/dpdk/spdk_pid60510 00:27:52.668 Removing: /var/run/dpdk/spdk_pid60655 00:27:52.668 Removing: /var/run/dpdk/spdk_pid60927 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61014 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61054 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61163 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61180 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61300 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61573 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61750 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61832 00:27:52.668 Removing: /var/run/dpdk/spdk_pid61919 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62014 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62047 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62082 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62144 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62261 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62878 00:27:52.668 Removing: /var/run/dpdk/spdk_pid62942 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63012 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63040 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63130 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63158 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63244 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63271 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63328 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63358 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63409 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63439 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63586 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63623 00:27:52.668 Removing: /var/run/dpdk/spdk_pid63698 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63773 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63803 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63861 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63896 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63936 00:27:52.925 Removing: /var/run/dpdk/spdk_pid63965 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64005 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64047 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64076 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64116 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64151 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64185 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64226 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64261 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64295 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64335 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64370 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64405 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64443 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64482 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64525 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64554 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64595 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64665 00:27:52.925 Removing: /var/run/dpdk/spdk_pid64776 00:27:52.925 Removing: /var/run/dpdk/spdk_pid65198 00:27:52.925 Removing: /var/run/dpdk/spdk_pid68588 00:27:52.925 Removing: /var/run/dpdk/spdk_pid68937 00:27:52.925 Removing: /var/run/dpdk/spdk_pid71400 00:27:52.925 Removing: /var/run/dpdk/spdk_pid71778 00:27:52.925 Removing: /var/run/dpdk/spdk_pid72037 00:27:52.925 Removing: /var/run/dpdk/spdk_pid72083 00:27:52.925 Removing: /var/run/dpdk/spdk_pid72709 00:27:52.925 Removing: 
/var/run/dpdk/spdk_pid73144 00:27:52.925 Removing: /var/run/dpdk/spdk_pid73194 00:27:52.925 Removing: /var/run/dpdk/spdk_pid73552 00:27:52.925 Removing: /var/run/dpdk/spdk_pid74080 00:27:52.925 Removing: /var/run/dpdk/spdk_pid74537 00:27:52.925 Removing: /var/run/dpdk/spdk_pid75508 00:27:52.925 Removing: /var/run/dpdk/spdk_pid76497 00:27:52.925 Removing: /var/run/dpdk/spdk_pid76608 00:27:52.925 Removing: /var/run/dpdk/spdk_pid76681 00:27:52.925 Removing: /var/run/dpdk/spdk_pid78144 00:27:52.925 Removing: /var/run/dpdk/spdk_pid78375 00:27:52.925 Removing: /var/run/dpdk/spdk_pid83701 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84148 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84256 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84408 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84453 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84499 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84539 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84697 00:27:52.925 Removing: /var/run/dpdk/spdk_pid84851 00:27:52.925 Removing: /var/run/dpdk/spdk_pid85116 00:27:52.925 Removing: /var/run/dpdk/spdk_pid85243 00:27:52.925 Removing: /var/run/dpdk/spdk_pid85490 00:27:52.925 Removing: /var/run/dpdk/spdk_pid85621 00:27:52.925 Removing: /var/run/dpdk/spdk_pid85750 00:27:52.925 Removing: /var/run/dpdk/spdk_pid86085 00:27:52.925 Removing: /var/run/dpdk/spdk_pid86507 00:27:52.925 Removing: /var/run/dpdk/spdk_pid86799 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87298 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87300 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87648 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87662 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87677 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87709 00:27:52.925 Removing: /var/run/dpdk/spdk_pid87714 00:27:52.925 Removing: /var/run/dpdk/spdk_pid88065 00:27:52.925 Removing: /var/run/dpdk/spdk_pid88112 00:27:52.925 Removing: /var/run/dpdk/spdk_pid88454 00:27:52.925 Removing: /var/run/dpdk/spdk_pid88700 00:27:52.925 Removing: /var/run/dpdk/spdk_pid89194 00:27:52.925 Removing: /var/run/dpdk/spdk_pid89783 00:27:52.925 Removing: /var/run/dpdk/spdk_pid91151 00:27:52.925 Removing: /var/run/dpdk/spdk_pid91743 00:27:52.925 Removing: /var/run/dpdk/spdk_pid91745 00:27:52.925 Removing: /var/run/dpdk/spdk_pid93686 00:27:52.925 Removing: /var/run/dpdk/spdk_pid93772 00:27:52.925 Removing: /var/run/dpdk/spdk_pid93861 00:27:52.925 Removing: /var/run/dpdk/spdk_pid93950 00:27:52.925 Removing: /var/run/dpdk/spdk_pid94109 00:27:52.925 Removing: /var/run/dpdk/spdk_pid94200 00:27:53.182 Removing: /var/run/dpdk/spdk_pid94285 00:27:53.182 Removing: /var/run/dpdk/spdk_pid94357 00:27:53.182 Removing: /var/run/dpdk/spdk_pid94707 00:27:53.182 Removing: /var/run/dpdk/spdk_pid95394 00:27:53.182 Removing: /var/run/dpdk/spdk_pid96748 00:27:53.182 Removing: /var/run/dpdk/spdk_pid96952 00:27:53.182 Removing: /var/run/dpdk/spdk_pid97243 00:27:53.182 Removing: /var/run/dpdk/spdk_pid97534 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98099 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98105 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98468 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98627 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98784 00:27:53.182 Removing: /var/run/dpdk/spdk_pid98881 00:27:53.182 Removing: /var/run/dpdk/spdk_pid99041 00:27:53.182 Removing: /var/run/dpdk/spdk_pid99150 00:27:53.182 Removing: /var/run/dpdk/spdk_pid99819 00:27:53.182 Removing: /var/run/dpdk/spdk_pid99850 00:27:53.182 Removing: /var/run/dpdk/spdk_pid99891 00:27:53.182 Clean 00:27:53.182 18:53:27 -- 
common/autotest_common.sh@1451 -- # return 0 00:27:53.182 18:53:27 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:53.182 18:53:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.182 18:53:27 -- common/autotest_common.sh@10 -- # set +x 00:27:53.182 18:53:27 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:53.182 18:53:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.182 18:53:27 -- common/autotest_common.sh@10 -- # set +x 00:27:53.182 18:53:27 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:53.182 18:53:27 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:53.182 18:53:27 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:53.182 18:53:27 -- spdk/autotest.sh@391 -- # hash lcov 00:27:53.182 18:53:27 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:53.182 18:53:27 -- spdk/autotest.sh@393 -- # hostname 00:27:53.182 18:53:27 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:53.440 geninfo: WARNING: invalid characters removed from testname! 00:28:19.970 18:53:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:20.535 18:53:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.061 18:53:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:24.964 18:53:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:26.936 18:54:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:29.496 18:54:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.414 18:54:05 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:31.414 18:54:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:31.414 18:54:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:31.414 18:54:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.414 18:54:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.414 18:54:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.414 18:54:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.414 18:54:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.414 18:54:05 -- paths/export.sh@5 -- $ export PATH 00:28:31.414 18:54:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.414 18:54:05 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:31.414 18:54:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:31.414 18:54:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069645.XXXXXX 00:28:31.414 18:54:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069645.vuJEhK 00:28:31.414 18:54:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:31.414 18:54:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:31.414 18:54:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:31.414 18:54:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:31.414 18:54:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:31.414 18:54:05 -- common/autobuild_common.sh@460 -- $ 
get_config_params 00:28:31.414 18:54:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:31.414 18:54:05 -- common/autotest_common.sh@10 -- $ set +x 00:28:31.414 18:54:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:28:31.414 18:54:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:31.414 18:54:05 -- pm/common@17 -- $ local monitor 00:28:31.414 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.414 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:31.414 18:54:05 -- pm/common@25 -- $ sleep 1 00:28:31.414 18:54:05 -- pm/common@21 -- $ date +%s 00:28:31.414 18:54:05 -- pm/common@21 -- $ date +%s 00:28:31.414 18:54:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721069645 00:28:31.414 18:54:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721069645 00:28:31.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721069645_collect-vmstat.pm.log 00:28:31.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721069645_collect-cpu-load.pm.log 00:28:32.613 18:54:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:32.613 18:54:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:32.613 18:54:06 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:32.613 18:54:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:32.613 18:54:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:32.613 18:54:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:32.614 18:54:06 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:32.614 18:54:06 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:32.614 18:54:06 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:32.614 18:54:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:32.614 18:54:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:32.614 18:54:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:32.614 18:54:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:32.614 18:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.614 18:54:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:32.614 18:54:06 -- pm/common@44 -- $ pid=103023 00:28:32.614 18:54:06 -- pm/common@50 -- $ kill -TERM 103023 00:28:32.614 18:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:32.614 18:54:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:32.614 18:54:06 -- pm/common@44 -- $ pid=103025 00:28:32.614 18:54:06 -- pm/common@50 -- $ kill -TERM 103025 00:28:32.614 + [[ -n 5164 ]] 00:28:32.614 + sudo kill 5164 00:28:32.621 [Pipeline] } 00:28:32.634 [Pipeline] // timeout 00:28:32.638 [Pipeline] } 00:28:32.651 [Pipeline] // stage 00:28:32.655 [Pipeline] } 
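start_monitor_resources above brackets the autopackage step with two lightweight collectors, collect-cpu-load and collect-vmstat, each given the power output directory and a monitor.autopackage.sh.<epoch> log prefix; stop_monitor_resources later reads the matching .pid files and sends SIGTERM (pids 103023 and 103025 here). A hedged sketch of that bracket using the paths printed in the log; backgrounding the collectors and their writing of the .pid files are assumptions made to keep the example self-contained:

    POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
    PREFIX=monitor.autopackage.sh.$(date +%s)

    # Start the collectors; each is assumed to drop a <name>.pid file in $POWER_DIR.
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$POWER_DIR" -l -p "$PREFIX" &
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat   -d "$POWER_DIR" -l -p "$PREFIX" &

    # ... build / package work happens here ...

    # Stop them again by pidfile, mirroring signal_monitor_resources TERM.
    for pidfile in "$POWER_DIR"/collect-cpu-load.pid "$POWER_DIR"/collect-vmstat.pid; do
        [[ -e "$pidfile" ]] && kill -TERM "$(cat "$pidfile")"
    done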
00:28:32.667 [Pipeline] // catchError 00:28:32.673 [Pipeline] stage 00:28:32.675 [Pipeline] { (Stop VM) 00:28:32.685 [Pipeline] sh 00:28:32.964 + vagrant halt 00:28:37.145 ==> default: Halting domain... 00:28:43.714 [Pipeline] sh 00:28:43.993 + vagrant destroy -f 00:28:48.180 ==> default: Removing domain... 00:28:48.192 [Pipeline] sh 00:28:48.535 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:48.544 [Pipeline] } 00:28:48.561 [Pipeline] // stage 00:28:48.567 [Pipeline] } 00:28:48.583 [Pipeline] // dir 00:28:48.588 [Pipeline] } 00:28:48.607 [Pipeline] // wrap 00:28:48.613 [Pipeline] } 00:28:48.631 [Pipeline] // catchError 00:28:48.640 [Pipeline] stage 00:28:48.642 [Pipeline] { (Epilogue) 00:28:48.657 [Pipeline] sh 00:28:48.940 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:55.531 [Pipeline] catchError 00:28:55.533 [Pipeline] { 00:28:55.547 [Pipeline] sh 00:28:55.828 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:56.120 Artifacts sizes are good 00:28:56.140 [Pipeline] } 00:28:56.158 [Pipeline] // catchError 00:28:56.169 [Pipeline] archiveArtifacts 00:28:56.175 Archiving artifacts 00:28:56.357 [Pipeline] cleanWs 00:28:56.369 [WS-CLEANUP] Deleting project workspace... 00:28:56.369 [WS-CLEANUP] Deferred wipeout is used... 00:28:56.375 [WS-CLEANUP] done 00:28:56.377 [Pipeline] } 00:28:56.396 [Pipeline] // stage 00:28:56.402 [Pipeline] } 00:28:56.419 [Pipeline] // node 00:28:56.425 [Pipeline] End of Pipeline 00:28:56.462 Finished: SUCCESS
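The run ends with the standard VM teardown and artifact epilogue seen in the pipeline stages above: halt and destroy the Vagrant guest, move the collected output into the Jenkins workspace, then compress and size-check it before archiving. A minimal sketch of the same steps as they could be run by hand from the job workspace (paths taken from the log; the Jenkins-side archiving and workspace cleanup are left to the pipeline):

    vagrant halt                    # ==> default: Halting domain...
    vagrant destroy -f              # ==> default: Removing domain...
    mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output

    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # printed "Artifacts sizes are good" in this run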